00:00:00.000 Started by upstream project "autotest-spdk-v24.09-vs-dpdk-v23.11" build number 221
00:00:00.000 originally caused by:
00:00:00.000 Started by upstream project "nightly-trigger" build number 3722
00:00:00.000 originally caused by:
00:00:00.000 Started by timer
00:00:00.047 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.048 The recommended git tool is: git
00:00:00.048 using credential 00000000-0000-0000-0000-000000000002
00:00:00.049 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.071 Fetching changes from the remote Git repository
00:00:00.072 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.102 Using shallow fetch with depth 1
00:00:00.102 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.102 > git --version # timeout=10
00:00:00.151 > git --version # 'git version 2.39.2'
00:00:00.151 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.201 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.201 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:03.249 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:03.259 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:03.271 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:03.271 > git config core.sparsecheckout # timeout=10
00:00:03.280 > git read-tree -mu HEAD # timeout=10
00:00:03.298 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:03.312 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:03.312 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:03.387 [Pipeline] Start of Pipeline
00:00:03.400 [Pipeline] library
00:00:03.402 Loading library shm_lib@master
00:00:03.402 Library shm_lib@master is cached. Copying from home.
00:00:03.413 [Pipeline] node
00:00:03.425 Running on VM-host-WFP7 in /var/jenkins/workspace/raid-vg-autotest
00:00:03.426 [Pipeline] {
00:00:03.434 [Pipeline] catchError
00:00:03.436 [Pipeline] {
00:00:03.444 [Pipeline] wrap
00:00:03.451 [Pipeline] {
00:00:03.458 [Pipeline] stage
00:00:03.460 [Pipeline] { (Prologue)
00:00:03.476 [Pipeline] echo
00:00:03.478 Node: VM-host-WFP7
00:00:03.484 [Pipeline] cleanWs
00:00:03.494 [WS-CLEANUP] Deleting project workspace...
00:00:03.494 [WS-CLEANUP] Deferred wipeout is used...
00:00:03.501 [WS-CLEANUP] done
00:00:03.691 [Pipeline] setCustomBuildProperty
00:00:03.768 [Pipeline] httpRequest
00:00:04.080 [Pipeline] echo
00:00:04.081 Sorcerer 10.211.164.20 is alive
00:00:04.088 [Pipeline] retry
00:00:04.090 [Pipeline] {
00:00:04.100 [Pipeline] httpRequest
00:00:04.104 HttpMethod: GET
00:00:04.104 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:04.105 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:04.106 Response Code: HTTP/1.1 200 OK
00:00:04.107 Success: Status code 200 is in the accepted range: 200,404
00:00:04.107 Saving response body to /var/jenkins/workspace/raid-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:04.253 [Pipeline] }
00:00:04.311 [Pipeline] // retry
00:00:04.318 [Pipeline] sh
00:00:04.600 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:04.616 [Pipeline] httpRequest
00:00:04.943 [Pipeline] echo
00:00:04.944 Sorcerer 10.211.164.20 is alive
00:00:04.951 [Pipeline] retry
00:00:04.952 [Pipeline] {
00:00:04.962 [Pipeline] httpRequest
00:00:04.967 HttpMethod: GET
00:00:04.968 URL: http://10.211.164.20/packages/spdk_b18e1bd6297ec2f89ab275de3193457af1c946df.tar.gz
00:00:04.968 Sending request to url: http://10.211.164.20/packages/spdk_b18e1bd6297ec2f89ab275de3193457af1c946df.tar.gz
00:00:04.970 Response Code: HTTP/1.1 200 OK
00:00:04.970 Success: Status code 200 is in the accepted range: 200,404
00:00:04.971 Saving response body to /var/jenkins/workspace/raid-vg-autotest/spdk_b18e1bd6297ec2f89ab275de3193457af1c946df.tar.gz
00:00:28.410 [Pipeline] }
00:00:28.427 [Pipeline] // retry
00:00:28.433 [Pipeline] sh
00:00:28.720 + tar --no-same-owner -xf spdk_b18e1bd6297ec2f89ab275de3193457af1c946df.tar.gz
00:00:31.273 [Pipeline] sh
00:00:31.559 + git -C spdk log --oneline -n5
00:00:31.559 b18e1bd62 version: v24.09.1-pre
00:00:31.559 19524ad45 version: v24.09
00:00:31.559 9756b40a3 dpdk: update submodule to include alarm_cancel fix
00:00:31.559 a808500d2 test/nvmf: disable nvmf_shutdown_tc4 on e810
00:00:31.559 3024272c6 bdev/nvme: take nvme_ctrlr.mutex when setting keys
00:00:31.579 [Pipeline] withCredentials
00:00:31.590 > git --version # timeout=10
00:00:31.601 > git --version # 'git version 2.39.2'
00:00:31.619 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS
00:00:31.621 [Pipeline] {
00:00:31.631 [Pipeline] retry
00:00:31.633 [Pipeline] {
00:00:31.645 [Pipeline] sh
00:00:31.925 + git ls-remote http://dpdk.org/git/dpdk-stable v23.11
00:00:32.195 [Pipeline] }
00:00:32.213 [Pipeline] // retry
00:00:32.217 [Pipeline] }
00:00:32.233 [Pipeline] // withCredentials
00:00:32.242 [Pipeline] httpRequest
00:00:32.637 [Pipeline] echo
00:00:32.639 Sorcerer 10.211.164.20 is alive
00:00:32.649 [Pipeline] retry
00:00:32.651 [Pipeline] {
00:00:32.665 [Pipeline] httpRequest
00:00:32.669 HttpMethod: GET
00:00:32.669 URL: http://10.211.164.20/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz
00:00:32.670 Sending request to url: http://10.211.164.20/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz
00:00:32.689 Response Code: HTTP/1.1 200 OK
00:00:32.690 Success: Status code 200 is in the accepted range: 200,404
00:00:32.690 Saving response body to /var/jenkins/workspace/raid-vg-autotest/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz
00:00:53.386 [Pipeline] }
00:00:53.403 [Pipeline] // retry
00:00:53.410 [Pipeline] sh
00:00:53.696 + tar --no-same-owner -xf dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz
00:00:55.090 [Pipeline] sh
00:00:55.375 + git -C dpdk log --oneline -n5
00:00:55.375 eeb0605f11 version: 23.11.0
00:00:55.375 238778122a doc: update release notes for 23.11
00:00:55.375 46aa6b3cfc doc: fix description of RSS features
00:00:55.375 dd88f51a57 devtools: forbid DPDK API in cnxk base driver
00:00:55.375 7e421ae345 devtools: support skipping forbid rule check
00:00:55.393 [Pipeline] writeFile
00:00:55.407 [Pipeline] sh
00:00:55.694 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:00:55.706 [Pipeline] sh
00:00:55.990 + cat autorun-spdk.conf
00:00:55.990 SPDK_RUN_FUNCTIONAL_TEST=1
00:00:55.990 SPDK_RUN_ASAN=1
00:00:55.990 SPDK_RUN_UBSAN=1
00:00:55.990 SPDK_TEST_RAID=1
00:00:55.990 SPDK_TEST_NATIVE_DPDK=v23.11
00:00:55.990 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build
00:00:55.990 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:00:55.998 RUN_NIGHTLY=1
00:00:56.000 [Pipeline] }
00:00:56.012 [Pipeline] // stage
00:00:56.025 [Pipeline] stage
00:00:56.026 [Pipeline] { (Run VM)
00:00:56.038 [Pipeline] sh
00:00:56.323 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:00:56.323 + echo 'Start stage prepare_nvme.sh'
00:00:56.323 Start stage prepare_nvme.sh
00:00:56.323 + [[ -n 1 ]]
00:00:56.323 + disk_prefix=ex1
00:00:56.323 + [[ -n /var/jenkins/workspace/raid-vg-autotest ]]
00:00:56.323 + [[ -e /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf ]]
00:00:56.323 + source /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf
00:00:56.323 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:56.323 ++ SPDK_RUN_ASAN=1
00:00:56.323 ++ SPDK_RUN_UBSAN=1
00:00:56.323 ++ SPDK_TEST_RAID=1
00:00:56.323 ++ SPDK_TEST_NATIVE_DPDK=v23.11
00:00:56.323 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build
00:00:56.323 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:00:56.323 ++ RUN_NIGHTLY=1
00:00:56.323 + cd /var/jenkins/workspace/raid-vg-autotest
00:00:56.323 + nvme_files=()
00:00:56.323 + declare -A nvme_files
00:00:56.323 + backend_dir=/var/lib/libvirt/images/backends
00:00:56.323 + nvme_files['nvme.img']=5G
00:00:56.323 + nvme_files['nvme-cmb.img']=5G
00:00:56.323 + nvme_files['nvme-multi0.img']=4G
00:00:56.323 + nvme_files['nvme-multi1.img']=4G
00:00:56.323 + nvme_files['nvme-multi2.img']=4G
00:00:56.323 + nvme_files['nvme-openstack.img']=8G
00:00:56.323 + nvme_files['nvme-zns.img']=5G
00:00:56.323 + (( SPDK_TEST_NVME_PMR == 1 ))
00:00:56.323 + (( SPDK_TEST_FTL == 1 ))
00:00:56.323 + (( SPDK_TEST_NVME_FDP == 1 ))
00:00:56.323 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:00:56.323 + for nvme in "${!nvme_files[@]}"
00:00:56.323 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi2.img -s 4G
00:00:56.323 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:00:56.323 + for nvme in "${!nvme_files[@]}"
00:00:56.323 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-cmb.img -s 5G
00:00:56.323 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:00:56.323 + for nvme in "${!nvme_files[@]}"
00:00:56.323 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-openstack.img -s 8G
00:00:56.323 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:00:56.323 + for nvme in "${!nvme_files[@]}"
00:00:56.323 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-zns.img -s 5G
00:00:56.323 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:00:56.323 + for nvme in "${!nvme_files[@]}"
00:00:56.323 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi1.img -s 4G
00:00:56.323 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:00:56.323 + for nvme in "${!nvme_files[@]}"
00:00:56.323 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi0.img -s 4G
00:00:56.323 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:00:56.323 + for nvme in "${!nvme_files[@]}"
00:00:56.323 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme.img -s 5G
00:00:56.323 Formatting '/var/lib/libvirt/images/backends/ex1-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:00:56.584 ++ sudo grep -rl ex1-nvme.img /etc/libvirt/qemu
00:00:56.584 + echo 'End stage prepare_nvme.sh'
00:00:56.584 End stage prepare_nvme.sh
00:00:56.596 [Pipeline] sh
00:00:56.880 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:00:56.880 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 -b /var/lib/libvirt/images/backends/ex1-nvme.img -b /var/lib/libvirt/images/backends/ex1-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex1-nvme-multi1.img:/var/lib/libvirt/images/backends/ex1-nvme-multi2.img -H -a -v -f fedora39
00:00:56.880
00:00:56.880 DIR=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant
00:00:56.880 SPDK_DIR=/var/jenkins/workspace/raid-vg-autotest/spdk
00:00:56.880 VAGRANT_TARGET=/var/jenkins/workspace/raid-vg-autotest
00:00:56.880 HELP=0
00:00:56.880 DRY_RUN=0
00:00:56.880 NVME_FILE=/var/lib/libvirt/images/backends/ex1-nvme.img,/var/lib/libvirt/images/backends/ex1-nvme-multi0.img,
00:00:56.880 NVME_DISKS_TYPE=nvme,nvme,
00:00:56.880 NVME_AUTO_CREATE=0
00:00:56.880 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex1-nvme-multi1.img:/var/lib/libvirt/images/backends/ex1-nvme-multi2.img,
00:00:56.880 NVME_CMB=,,
00:00:56.880 NVME_PMR=,,
00:00:56.880 NVME_ZNS=,,
00:00:56.880 NVME_MS=,,
00:00:56.880 NVME_FDP=,,
00:00:56.880 SPDK_VAGRANT_DISTRO=fedora39
00:00:56.880 SPDK_VAGRANT_VMCPU=10
00:00:56.880 SPDK_VAGRANT_VMRAM=12288
00:00:56.880 SPDK_VAGRANT_PROVIDER=libvirt
00:00:56.880 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911
00:00:56.880 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:00:56.880 SPDK_OPENSTACK_NETWORK=0
00:00:56.880 VAGRANT_PACKAGE_BOX=0
00:00:56.880 VAGRANTFILE=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant/Vagrantfile
00:00:56.880 FORCE_DISTRO=true
00:00:56.880 VAGRANT_BOX_VERSION=
00:00:56.880 EXTRA_VAGRANTFILES=
00:00:56.880 NIC_MODEL=virtio
00:00:56.880
00:00:56.880 mkdir: created directory '/var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt'
00:00:56.880 /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt /var/jenkins/workspace/raid-vg-autotest
00:00:58.788 Bringing machine 'default' up with 'libvirt' provider...
00:00:59.359 ==> default: Creating image (snapshot of base box volume).
00:00:59.359 ==> default: Creating domain with the following settings...
00:00:59.359 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1734151809_7f71921af8db5219166f
00:00:59.359 ==> default: -- Domain type: kvm
00:00:59.359 ==> default: -- Cpus: 10
00:00:59.359 ==> default: -- Feature: acpi
00:00:59.359 ==> default: -- Feature: apic
00:00:59.359 ==> default: -- Feature: pae
00:00:59.359 ==> default: -- Memory: 12288M
00:00:59.359 ==> default: -- Memory Backing: hugepages:
00:00:59.359 ==> default: -- Management MAC:
00:00:59.359 ==> default: -- Loader:
00:00:59.359 ==> default: -- Nvram:
00:00:59.359 ==> default: -- Base box: spdk/fedora39
00:00:59.359 ==> default: -- Storage pool: default
00:00:59.359 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1734151809_7f71921af8db5219166f.img (20G)
00:00:59.359 ==> default: -- Volume Cache: default
00:00:59.359 ==> default: -- Kernel:
00:00:59.359 ==> default: -- Initrd:
00:00:59.359 ==> default: -- Graphics Type: vnc
00:00:59.359 ==> default: -- Graphics Port: -1
00:00:59.359 ==> default: -- Graphics IP: 127.0.0.1
00:00:59.359 ==> default: -- Graphics Password: Not defined
00:00:59.359 ==> default: -- Video Type: cirrus
00:00:59.359 ==> default: -- Video VRAM: 9216
00:00:59.359 ==> default: -- Sound Type:
00:00:59.359 ==> default: -- Keymap: en-us
00:00:59.359 ==> default: -- TPM Path:
00:00:59.359 ==> default: -- INPUT: type=mouse, bus=ps2
00:00:59.359 ==> default: -- Command line args:
00:00:59.359 ==> default: -> value=-device,
00:00:59.359 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:00:59.359 ==> default: -> value=-drive,
00:00:59.359 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme.img,if=none,id=nvme-0-drive0,
00:00:59.359 ==> default: -> value=-device,
00:00:59.359 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:00:59.359 ==> default: -> value=-device,
00:00:59.359 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:00:59.359 ==> default: -> value=-drive,
00:00:59.359 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi0.img,if=none,id=nvme-1-drive0,
00:00:59.359 ==> default: -> value=-device,
00:00:59.359 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:00:59.359 ==> default: -> value=-drive,
00:00:59.359 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi1.img,if=none,id=nvme-1-drive1,
00:00:59.359 ==> default: -> value=-device,
00:00:59.359 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:00:59.359 ==> default: -> value=-drive,
00:00:59.359 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi2.img,if=none,id=nvme-1-drive2,
00:00:59.359 ==> default: -> value=-device,
00:00:59.359 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:00:59.619 ==> default: Creating shared folders metadata...
00:00:59.619 ==> default: Starting domain.
00:01:01.001 ==> default: Waiting for domain to get an IP address...
00:01:19.111 ==> default: Waiting for SSH to become available...
00:01:19.111 ==> default: Configuring and enabling network interfaces...
00:01:24.393 default: SSH address: 192.168.121.228:22
00:01:24.393 default: SSH username: vagrant
00:01:24.393 default: SSH auth method: private key
00:01:27.691 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk
00:01:35.827 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/dpdk/ => /home/vagrant/spdk_repo/dpdk
00:01:41.104 ==> default: Mounting SSHFS shared folder...
00:01:43.642 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
00:01:43.642 ==> default: Checking Mount..
00:01:45.550 ==> default: Folder Successfully Mounted!
00:01:45.550 ==> default: Running provisioner: file...
00:01:46.488 default: ~/.gitconfig => .gitconfig
00:01:47.057
00:01:47.057 SUCCESS!
00:01:47.057
00:01:47.057 cd to /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use.
00:01:47.057 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:01:47.057 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt" to destroy all trace of vm.
00:01:47.057
00:01:47.066 [Pipeline] }
00:01:47.080 [Pipeline] // stage
00:01:47.089 [Pipeline] dir
00:01:47.089 Running in /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt
00:01:47.091 [Pipeline] {
00:01:47.102 [Pipeline] catchError
00:01:47.104 [Pipeline] {
00:01:47.115 [Pipeline] sh
00:01:47.397 + vagrant+ ssh-config --host vagrant
00:01:47.397 sed -ne /^Host/,$p
00:01:47.397 + tee ssh_conf
00:01:49.932 Host vagrant
00:01:49.932 HostName 192.168.121.228
00:01:49.932 User vagrant
00:01:49.932 Port 22
00:01:49.932 UserKnownHostsFile /dev/null
00:01:49.932 StrictHostKeyChecking no
00:01:49.932 PasswordAuthentication no
00:01:49.932 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39
00:01:49.932 IdentitiesOnly yes
00:01:49.932 LogLevel FATAL
00:01:49.932 ForwardAgent yes
00:01:49.932 ForwardX11 yes
00:01:49.932
00:01:49.946 [Pipeline] withEnv
00:01:49.948 [Pipeline] {
00:01:49.961 [Pipeline] sh
00:01:50.245 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash
00:01:50.246 source /etc/os-release
00:01:50.246 [[ -e /image.version ]] && img=$(< /image.version)
00:01:50.246 # Minimal, systemd-like check.
00:01:50.246 if [[ -e /.dockerenv ]]; then
00:01:50.246 # Clear garbage from the node's name:
00:01:50.246 # agt-er_autotest_547-896 -> autotest_547-896
00:01:50.246 # $HOSTNAME is the actual container id
00:01:50.246 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:01:50.246 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:01:50.246 # We can assume this is a mount from a host where container is running,
00:01:50.246 # so fetch its hostname to easily identify the target swarm worker.
00:01:50.246 container="$(< /etc/hostname) ($agent)"
00:01:50.246 else
00:01:50.246 # Fallback
00:01:50.246 container=$agent
00:01:50.246 fi
00:01:50.246 fi
00:01:50.246 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:01:50.246
00:01:50.518 [Pipeline] }
00:01:50.533 [Pipeline] // withEnv
00:01:50.541 [Pipeline] setCustomBuildProperty
00:01:50.555 [Pipeline] stage
00:01:50.557 [Pipeline] { (Tests)
00:01:50.573 [Pipeline] sh
00:01:50.856 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:01:51.128 [Pipeline] sh
00:01:51.409 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:01:51.681 [Pipeline] timeout
00:01:51.681 Timeout set to expire in 1 hr 30 min
00:01:51.683 [Pipeline] {
00:01:51.696 [Pipeline] sh
00:01:51.978 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard
00:01:52.548 HEAD is now at b18e1bd62 version: v24.09.1-pre
00:01:52.560 [Pipeline] sh
00:01:52.857 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo
00:01:53.137 [Pipeline] sh
00:01:53.420 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:01:53.697 [Pipeline] sh
00:01:53.982 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=raid-vg-autotest ./autoruner.sh spdk_repo
00:01:54.242 ++ readlink -f spdk_repo
00:01:54.242 + DIR_ROOT=/home/vagrant/spdk_repo
00:01:54.242 + [[ -n /home/vagrant/spdk_repo ]]
00:01:54.242 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:01:54.242 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:01:54.242 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:01:54.242 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:01:54.242 + [[ -d /home/vagrant/spdk_repo/output ]]
00:01:54.242 + [[ raid-vg-autotest == pkgdep-* ]]
00:01:54.242 + cd /home/vagrant/spdk_repo
00:01:54.242 + source /etc/os-release
00:01:54.242 ++ NAME='Fedora Linux'
00:01:54.242 ++ VERSION='39 (Cloud Edition)'
00:01:54.242 ++ ID=fedora
00:01:54.242 ++ VERSION_ID=39
00:01:54.242 ++ VERSION_CODENAME=
00:01:54.242 ++ PLATFORM_ID=platform:f39
00:01:54.242 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:01:54.242 ++ ANSI_COLOR='0;38;2;60;110;180'
00:01:54.242 ++ LOGO=fedora-logo-icon
00:01:54.242 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:01:54.242 ++ HOME_URL=https://fedoraproject.org/
00:01:54.242 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:01:54.242 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:01:54.242 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:01:54.242 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:01:54.242 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:01:54.242 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:01:54.242 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:01:54.242 ++ SUPPORT_END=2024-11-12
00:01:54.242 ++ VARIANT='Cloud Edition'
00:01:54.242 ++ VARIANT_ID=cloud
00:01:54.242 + uname -a
00:01:54.242 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:01:54.242 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:01:54.810 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:01:54.810 Hugepages
00:01:54.810 node hugesize free / total
00:01:54.810 node0 1048576kB 0 / 0
00:01:54.810 node0 2048kB 0 / 0
00:01:54.810
00:01:54.810 Type BDF Vendor Device NUMA Driver Device Block devices
00:01:54.810 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:01:54.810 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:01:54.810 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3
00:01:54.810 + rm -f /tmp/spdk-ld-path
00:01:54.810 + source autorun-spdk.conf
00:01:54.810 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:54.810 ++ SPDK_RUN_ASAN=1
00:01:54.810 ++ SPDK_RUN_UBSAN=1
00:01:54.810 ++ SPDK_TEST_RAID=1
00:01:54.810 ++ SPDK_TEST_NATIVE_DPDK=v23.11
00:01:54.810 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build
00:01:54.810 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:54.810 ++ RUN_NIGHTLY=1
00:01:54.810 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:01:54.810 + [[ -n '' ]]
00:01:54.810 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:01:55.070 + for M in /var/spdk/build-*-manifest.txt
00:01:55.070 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:01:55.070 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/
00:01:55.070 + for M in /var/spdk/build-*-manifest.txt
00:01:55.070 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:01:55.070 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:01:55.070 + for M in /var/spdk/build-*-manifest.txt
00:01:55.070 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:01:55.070 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:01:55.070 ++ uname
00:01:55.070 + [[ Linux == \L\i\n\u\x ]]
00:01:55.070 + sudo dmesg -T
00:01:55.070 + sudo dmesg --clear
00:01:55.070 + dmesg_pid=6161
00:01:55.070 + [[ Fedora Linux == FreeBSD ]]
00:01:55.070 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:55.070 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:55.070 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:01:55.070 + sudo dmesg -Tw
00:01:55.070 + [[ -x /usr/src/fio-static/fio ]]
00:01:55.070 + export FIO_BIN=/usr/src/fio-static/fio
00:01:55.070 + FIO_BIN=/usr/src/fio-static/fio
00:01:55.070 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:01:55.070 + [[ ! -v VFIO_QEMU_BIN ]]
00:01:55.070 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:01:55.070 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:55.070 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:55.071 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:01:55.071 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:55.071 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:55.071 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:01:55.071 Test configuration:
00:01:55.071 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:55.071 SPDK_RUN_ASAN=1
00:01:55.071 SPDK_RUN_UBSAN=1
00:01:55.071 SPDK_TEST_RAID=1
00:01:55.071 SPDK_TEST_NATIVE_DPDK=v23.11
00:01:55.071 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build
00:01:55.071 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:55.331 RUN_NIGHTLY=1
04:51:05 -- common/autotest_common.sh@1680 -- $ [[ n == y ]]
00:01:55.331 04:51:05 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:01:55.331 04:51:05 -- scripts/common.sh@15 -- $ shopt -s extglob
00:01:55.331 04:51:05 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:01:55.331 04:51:05 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:01:55.331 04:51:05 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:01:55.331 04:51:05 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:55.331 04:51:05 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:55.331 04:51:05 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:55.331 04:51:05 -- paths/export.sh@5 -- $ export PATH
00:01:55.331 04:51:05 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:55.331 04:51:05 -- common/autobuild_common.sh@478 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:01:55.331 04:51:05 -- common/autobuild_common.sh@479 -- $ date +%s
00:01:55.331 04:51:05 -- common/autobuild_common.sh@479 -- $ mktemp -dt spdk_1734151865.XXXXXX
00:01:55.331 04:51:05 -- common/autobuild_common.sh@479 -- $ SPDK_WORKSPACE=/tmp/spdk_1734151865.STPrtO
00:01:55.331 04:51:05 -- common/autobuild_common.sh@481 -- $ [[ -n '' ]]
00:01:55.331 04:51:05 -- common/autobuild_common.sh@485 -- $ '[' -n v23.11 ']'
00:01:55.331 04:51:05 -- common/autobuild_common.sh@486 -- $ dirname /home/vagrant/spdk_repo/dpdk/build
00:01:55.331 04:51:05 -- common/autobuild_common.sh@486 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk'
00:01:55.331 04:51:05 -- common/autobuild_common.sh@492 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:01:55.331 04:51:05 -- common/autobuild_common.sh@494 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:01:55.331 04:51:05 -- common/autobuild_common.sh@495 -- $ get_config_params
00:01:55.331 04:51:05 -- common/autotest_common.sh@407 -- $ xtrace_disable
00:01:55.331 04:51:05 -- common/autotest_common.sh@10 -- $ set +x
00:01:55.331 04:51:06 -- common/autobuild_common.sh@495 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-dpdk=/home/vagrant/spdk_repo/dpdk/build'
00:01:55.331 04:51:06 -- common/autobuild_common.sh@497 -- $ start_monitor_resources
00:01:55.331 04:51:06 -- pm/common@17 -- $ local monitor
00:01:55.331 04:51:06 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:55.331 04:51:06 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:55.331 04:51:06 -- pm/common@25 -- $ sleep 1
00:01:55.331 04:51:06 -- pm/common@21 -- $ date +%s
00:01:55.331 04:51:06 -- pm/common@21 -- $ date +%s
00:01:55.331 04:51:06 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1734151866
00:01:55.331 04:51:06 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1734151866
00:01:55.331 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1734151866_collect-vmstat.pm.log
00:01:55.331 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1734151866_collect-cpu-load.pm.log
00:01:56.271 04:51:07 -- common/autobuild_common.sh@498 -- $ trap stop_monitor_resources EXIT
00:01:56.271 04:51:07 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:01:56.271 04:51:07 -- spdk/autobuild.sh@12 -- $ umask 022
00:01:56.271 04:51:07 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:01:56.271 04:51:07 -- spdk/autobuild.sh@16 -- $ date -u
00:01:56.271 Sat Dec 14 04:51:07 AM UTC 2024
00:01:56.272 04:51:07 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:01:56.272 v24.09-1-gb18e1bd62
00:01:56.272 04:51:07 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:01:56.272 04:51:07 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:01:56.272 04:51:07 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']'
00:01:56.272 04:51:07 -- common/autotest_common.sh@1107 -- $ xtrace_disable
00:01:56.272 04:51:07 -- common/autotest_common.sh@10 -- $ set +x
00:01:56.272 ************************************
00:01:56.272 START TEST asan
00:01:56.272 ************************************
00:01:56.272 using asan
00:01:56.272 04:51:07 asan -- common/autotest_common.sh@1125 -- $ echo 'using asan'
00:01:56.272
00:01:56.272 real 0m0.000s
00:01:56.272 user 0m0.000s
00:01:56.272 sys 0m0.000s
00:01:56.272 04:51:07 asan -- common/autotest_common.sh@1126 -- $ xtrace_disable
00:01:56.272 04:51:07 asan -- common/autotest_common.sh@10 -- $ set +x
00:01:56.272 ************************************
00:01:56.272 END TEST asan
00:01:56.272 ************************************
00:01:56.272 04:51:07 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:01:56.272 04:51:07 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:01:56.272 04:51:07 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']'
00:01:56.272 04:51:07 -- common/autotest_common.sh@1107 -- $ xtrace_disable
00:01:56.272 04:51:07 -- common/autotest_common.sh@10 -- $ set +x
00:01:56.272 ************************************
00:01:56.272 START TEST ubsan
00:01:56.272 ************************************
00:01:56.272 using ubsan
00:01:56.272 04:51:07 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan'
00:01:56.272
00:01:56.272 real 0m0.000s
00:01:56.272 user 0m0.000s
00:01:56.272 sys 0m0.000s
00:01:56.272 04:51:07 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable
00:01:56.272 04:51:07 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:01:56.272 ************************************
00:01:56.272 END TEST ubsan
00:01:56.272 ************************************
00:01:56.532 04:51:07 -- spdk/autobuild.sh@27 -- $ '[' -n v23.11 ']'
00:01:56.532 04:51:07 -- spdk/autobuild.sh@28 -- $ build_native_dpdk
00:01:56.532 04:51:07 -- common/autobuild_common.sh@442 -- $ run_test build_native_dpdk _build_native_dpdk
00:01:56.532 04:51:07 -- common/autotest_common.sh@1101 -- $ '[' 2 -le 1 ']'
00:01:56.532 04:51:07 -- common/autotest_common.sh@1107 -- $ xtrace_disable
00:01:56.532 04:51:07 -- common/autotest_common.sh@10 -- $ set +x
00:01:56.532 ************************************
00:01:56.532 START TEST build_native_dpdk
00:01:56.532 ************************************
00:01:56.532 04:51:07 build_native_dpdk -- common/autotest_common.sh@1125 -- $ _build_native_dpdk
00:01:56.532 04:51:07 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir
00:01:56.532 04:51:07 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir
00:01:56.532 04:51:07 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version
00:01:56.532 04:51:07 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler
00:01:56.532 04:51:07 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods
00:01:56.532 04:51:07 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk
00:01:56.532 04:51:07 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc
00:01:56.532 04:51:07 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:01:56.532 04:51:07 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc 00:01:56.532 04:51:07 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:01:56.532 04:51:07 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:01:56.532 04:51:07 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:01:56.532 04:51:07 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:01:56.532 04:51:07 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:01:56.532 04:51:07 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/home/vagrant/spdk_repo/dpdk/build 00:01:56.532 04:51:07 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:01:56.532 04:51:07 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/home/vagrant/spdk_repo/dpdk 00:01:56.532 04:51:07 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! 
-d /home/vagrant/spdk_repo/dpdk ]] 00:01:56.532 04:51:07 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/home/vagrant/spdk_repo/spdk 00:01:56.532 04:51:07 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /home/vagrant/spdk_repo/dpdk log --oneline -n 5 00:01:56.532 eeb0605f11 version: 23.11.0 00:01:56.532 238778122a doc: update release notes for 23.11 00:01:56.532 46aa6b3cfc doc: fix description of RSS features 00:01:56.532 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:01:56.532 7e421ae345 devtools: support skipping forbid rule check 00:01:56.532 04:51:07 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:01:56.532 04:51:07 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:01:56.532 04:51:07 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=23.11.0 00:01:56.532 04:51:07 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:01:56.532 04:51:07 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:01:56.532 04:51:07 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:01:56.532 04:51:07 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:01:56.532 04:51:07 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:01:56.532 04:51:07 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:01:56.532 04:51:07 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:01:56.532 04:51:07 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:01:56.532 04:51:07 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:01:56.532 04:51:07 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:01:56.532 04:51:07 build_native_dpdk -- 
common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:01:56.532 04:51:07 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /home/vagrant/spdk_repo/dpdk 00:01:56.532 04:51:07 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s 00:01:56.532 04:51:07 build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:01:56.532 04:51:07 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 23.11.0 21.11.0 00:01:56.532 04:51:07 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 23.11.0 '<' 21.11.0 00:01:56.532 04:51:07 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:01:56.532 04:51:07 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:01:56.532 04:51:07 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:01:56.532 04:51:07 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:01:56.532 04:51:07 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:01:56.532 04:51:07 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:01:56.532 04:51:07 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:01:56.532 04:51:07 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:01:56.532 04:51:07 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:01:56.532 04:51:07 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:01:56.532 04:51:07 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:01:56.532 04:51:07 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:01:56.532 04:51:07 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:01:56.532 04:51:07 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:01:56.532 04:51:07 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23 00:01:56.532 04:51:07 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23 00:01:56.532 04:51:07 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:01:56.532 04:51:07 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23 00:01:56.532 04:51:07 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23 00:01:56.532 04:51:07 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 21 00:01:56.532 04:51:07 build_native_dpdk -- scripts/common.sh@353 -- $ local d=21 00:01:56.532 04:51:07 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:01:56.532 04:51:07 build_native_dpdk -- scripts/common.sh@355 -- $ echo 21 00:01:56.532 04:51:07 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=21 00:01:56.532 04:51:07 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:01:56.532 04:51:07 build_native_dpdk -- scripts/common.sh@367 -- $ return 1 00:01:56.532 04:51:07 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1 00:01:56.532 patching file config/rte_config.h 00:01:56.532 Hunk #1 succeeded at 60 (offset 1 line). 
00:01:56.532 04:51:07 build_native_dpdk -- common/autobuild_common.sh@176 -- $ lt 23.11.0 24.07.0 00:01:56.532 04:51:07 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 23.11.0 '<' 24.07.0 00:01:56.532 04:51:07 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:01:56.532 04:51:07 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:01:56.532 04:51:07 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:01:56.532 04:51:07 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:01:56.532 04:51:07 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:01:56.532 04:51:07 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:01:56.532 04:51:07 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:01:56.532 04:51:07 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:01:56.532 04:51:07 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:01:56.532 04:51:07 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:01:56.532 04:51:07 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:01:56.532 04:51:07 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:01:56.532 04:51:07 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:01:56.532 04:51:07 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:01:56.532 04:51:07 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23 00:01:56.532 04:51:07 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23 00:01:56.532 04:51:07 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:01:56.532 04:51:07 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23 00:01:56.532 04:51:07 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23 00:01:56.532 04:51:07 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:01:56.532 04:51:07 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:01:56.533 04:51:07 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:01:56.533 04:51:07 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:01:56.533 04:51:07 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:01:56.533 04:51:07 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:01:56.533 04:51:07 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:01:56.533 04:51:07 build_native_dpdk -- scripts/common.sh@368 -- $ return 0 00:01:56.533 04:51:07 build_native_dpdk -- common/autobuild_common.sh@177 -- $ patch -p1 00:01:56.533 patching file lib/pcapng/rte_pcapng.c 00:01:56.533 04:51:07 build_native_dpdk -- common/autobuild_common.sh@179 -- $ ge 23.11.0 24.07.0 00:01:56.533 04:51:07 build_native_dpdk -- scripts/common.sh@376 -- $ cmp_versions 23.11.0 '>=' 24.07.0 00:01:56.533 04:51:07 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:01:56.533 04:51:07 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:01:56.533 04:51:07 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:01:56.533 04:51:07 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:01:56.533 04:51:07 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:01:56.533 04:51:07 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:01:56.533 04:51:07 build_native_dpdk -- 
scripts/common.sh@338 -- $ local 'op=>=' 00:01:56.533 04:51:07 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:01:56.533 04:51:07 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:01:56.533 04:51:07 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:01:56.533 04:51:07 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:01:56.533 04:51:07 build_native_dpdk -- scripts/common.sh@348 -- $ : 1 00:01:56.533 04:51:07 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:01:56.533 04:51:07 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:01:56.533 04:51:07 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23 00:01:56.533 04:51:07 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23 00:01:56.533 04:51:07 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:01:56.533 04:51:07 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23 00:01:56.533 04:51:07 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23 00:01:56.533 04:51:07 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:01:56.533 04:51:07 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:01:56.533 04:51:07 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:01:56.533 04:51:07 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:01:56.533 04:51:07 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:01:56.533 04:51:07 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:01:56.533 04:51:07 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:01:56.533 04:51:07 build_native_dpdk -- scripts/common.sh@368 -- $ return 1 00:01:56.533 04:51:07 build_native_dpdk -- common/autobuild_common.sh@183 -- $ dpdk_kmods=false 00:01:56.533 04:51:07 build_native_dpdk -- common/autobuild_common.sh@184 -- $ uname -s 00:01:56.533 04:51:07 build_native_dpdk -- common/autobuild_common.sh@184 -- 
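The xtrace above shows `cmp_versions` splitting dotted versions on `.-:` and comparing field by field (23.11.0 is not < 21.11.0, so the first patch path is taken; 23.11.0 < 24.07.0, so the pcapng patch applies). A minimal re-implementation of the same idea — a hedged sketch, not the actual scripts/common.sh source; `ver_lt` is a hypothetical name:

```shell
# Sketch of the field-wise dotted-version comparison traced above.
# Splits both versions on the same IFS characters the log shows (.-:)
# and compares numerically, left to right.
ver_lt() {
    local IFS=.-:
    read -ra v1 <<< "$1"
    read -ra v2 <<< "$2"
    local i max=${#v1[@]}
    (( ${#v2[@]} > max )) && max=${#v2[@]}
    for (( i = 0; i < max; i++ )); do
        local a=${v1[i]:-0} b=${v2[i]:-0}   # missing fields compare as 0
        if (( a > b )); then return 1; fi   # strictly greater: not less-than
        if (( a < b )); then return 0; fi   # strictly smaller: less-than
    done
    return 1                                # equal: not strictly less-than
}

ver_lt 23.11.0 21.11.0 && echo lt || echo not-lt   # not-lt (23 > 21)
ver_lt 23.11.0 24.07.0 && echo lt || echo not-lt   # lt (23 < 24)
```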
$ '[' Linux = FreeBSD ']' 00:01:56.533 04:51:07 build_native_dpdk -- common/autobuild_common.sh@188 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:01:56.533 04:51:07 build_native_dpdk -- common/autobuild_common.sh@188 -- $ meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:03.108 The Meson build system 00:02:03.108 Version: 1.5.0 00:02:03.108 Source dir: /home/vagrant/spdk_repo/dpdk 00:02:03.108 Build dir: /home/vagrant/spdk_repo/dpdk/build-tmp 00:02:03.108 Build type: native build 00:02:03.108 Program cat found: YES (/usr/bin/cat) 00:02:03.108 Project name: DPDK 00:02:03.108 Project version: 23.11.0 00:02:03.108 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:03.108 C linker for the host machine: gcc ld.bfd 2.40-14 00:02:03.108 Host machine cpu family: x86_64 00:02:03.108 Host machine cpu: x86_64 00:02:03.108 Message: ## Building in Developer Mode ## 00:02:03.108 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:03.108 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/check-symbols.sh) 00:02:03.108 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/options-ibverbs-static.sh) 00:02:03.108 Program python3 found: YES (/usr/bin/python3) 00:02:03.108 Program cat found: YES (/usr/bin/cat) 00:02:03.108 config/meson.build:113: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
00:02:03.108 Compiler for C supports arguments -march=native: YES 00:02:03.108 Checking for size of "void *" : 8 00:02:03.108 Checking for size of "void *" : 8 (cached) 00:02:03.108 Library m found: YES 00:02:03.108 Library numa found: YES 00:02:03.108 Has header "numaif.h" : YES 00:02:03.108 Library fdt found: NO 00:02:03.108 Library execinfo found: NO 00:02:03.108 Has header "execinfo.h" : YES 00:02:03.108 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:03.108 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:03.108 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:03.108 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:03.108 Run-time dependency openssl found: YES 3.1.1 00:02:03.108 Run-time dependency libpcap found: YES 1.10.4 00:02:03.108 Has header "pcap.h" with dependency libpcap: YES 00:02:03.108 Compiler for C supports arguments -Wcast-qual: YES 00:02:03.108 Compiler for C supports arguments -Wdeprecated: YES 00:02:03.108 Compiler for C supports arguments -Wformat: YES 00:02:03.108 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:03.108 Compiler for C supports arguments -Wformat-security: NO 00:02:03.108 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:03.109 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:03.109 Compiler for C supports arguments -Wnested-externs: YES 00:02:03.109 Compiler for C supports arguments -Wold-style-definition: YES 00:02:03.109 Compiler for C supports arguments -Wpointer-arith: YES 00:02:03.109 Compiler for C supports arguments -Wsign-compare: YES 00:02:03.109 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:03.109 Compiler for C supports arguments -Wundef: YES 00:02:03.109 Compiler for C supports arguments -Wwrite-strings: YES 00:02:03.109 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:03.109 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:03.109 Compiler for C 
supports arguments -Wno-missing-field-initializers: YES 00:02:03.109 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:03.109 Program objdump found: YES (/usr/bin/objdump) 00:02:03.109 Compiler for C supports arguments -mavx512f: YES 00:02:03.109 Checking if "AVX512 checking" compiles: YES 00:02:03.109 Fetching value of define "__SSE4_2__" : 1 00:02:03.109 Fetching value of define "__AES__" : 1 00:02:03.109 Fetching value of define "__AVX__" : 1 00:02:03.109 Fetching value of define "__AVX2__" : 1 00:02:03.109 Fetching value of define "__AVX512BW__" : 1 00:02:03.109 Fetching value of define "__AVX512CD__" : 1 00:02:03.109 Fetching value of define "__AVX512DQ__" : 1 00:02:03.109 Fetching value of define "__AVX512F__" : 1 00:02:03.109 Fetching value of define "__AVX512VL__" : 1 00:02:03.109 Fetching value of define "__PCLMUL__" : 1 00:02:03.109 Fetching value of define "__RDRND__" : 1 00:02:03.109 Fetching value of define "__RDSEED__" : 1 00:02:03.109 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:03.109 Fetching value of define "__znver1__" : (undefined) 00:02:03.109 Fetching value of define "__znver2__" : (undefined) 00:02:03.109 Fetching value of define "__znver3__" : (undefined) 00:02:03.109 Fetching value of define "__znver4__" : (undefined) 00:02:03.109 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:03.109 Message: lib/log: Defining dependency "log" 00:02:03.109 Message: lib/kvargs: Defining dependency "kvargs" 00:02:03.109 Message: lib/telemetry: Defining dependency "telemetry" 00:02:03.109 Checking for function "getentropy" : NO 00:02:03.109 Message: lib/eal: Defining dependency "eal" 00:02:03.109 Message: lib/ring: Defining dependency "ring" 00:02:03.109 Message: lib/rcu: Defining dependency "rcu" 00:02:03.109 Message: lib/mempool: Defining dependency "mempool" 00:02:03.109 Message: lib/mbuf: Defining dependency "mbuf" 00:02:03.109 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:03.109 Fetching 
value of define "__AVX512F__" : 1 (cached) 00:02:03.109 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:03.109 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:03.109 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:03.109 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:02:03.109 Compiler for C supports arguments -mpclmul: YES 00:02:03.109 Compiler for C supports arguments -maes: YES 00:02:03.109 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:03.109 Compiler for C supports arguments -mavx512bw: YES 00:02:03.109 Compiler for C supports arguments -mavx512dq: YES 00:02:03.109 Compiler for C supports arguments -mavx512vl: YES 00:02:03.109 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:03.109 Compiler for C supports arguments -mavx2: YES 00:02:03.109 Compiler for C supports arguments -mavx: YES 00:02:03.109 Message: lib/net: Defining dependency "net" 00:02:03.109 Message: lib/meter: Defining dependency "meter" 00:02:03.109 Message: lib/ethdev: Defining dependency "ethdev" 00:02:03.109 Message: lib/pci: Defining dependency "pci" 00:02:03.109 Message: lib/cmdline: Defining dependency "cmdline" 00:02:03.109 Message: lib/metrics: Defining dependency "metrics" 00:02:03.109 Message: lib/hash: Defining dependency "hash" 00:02:03.109 Message: lib/timer: Defining dependency "timer" 00:02:03.109 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:03.109 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:03.109 Fetching value of define "__AVX512CD__" : 1 (cached) 00:02:03.109 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:03.109 Message: lib/acl: Defining dependency "acl" 00:02:03.109 Message: lib/bbdev: Defining dependency "bbdev" 00:02:03.109 Message: lib/bitratestats: Defining dependency "bitratestats" 00:02:03.109 Run-time dependency libelf found: YES 0.191 00:02:03.109 Message: lib/bpf: Defining dependency "bpf" 00:02:03.109 Message: lib/cfgfile: Defining dependency 
"cfgfile" 00:02:03.109 Message: lib/compressdev: Defining dependency "compressdev" 00:02:03.109 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:03.109 Message: lib/distributor: Defining dependency "distributor" 00:02:03.109 Message: lib/dmadev: Defining dependency "dmadev" 00:02:03.109 Message: lib/efd: Defining dependency "efd" 00:02:03.109 Message: lib/eventdev: Defining dependency "eventdev" 00:02:03.109 Message: lib/dispatcher: Defining dependency "dispatcher" 00:02:03.109 Message: lib/gpudev: Defining dependency "gpudev" 00:02:03.109 Message: lib/gro: Defining dependency "gro" 00:02:03.109 Message: lib/gso: Defining dependency "gso" 00:02:03.109 Message: lib/ip_frag: Defining dependency "ip_frag" 00:02:03.109 Message: lib/jobstats: Defining dependency "jobstats" 00:02:03.109 Message: lib/latencystats: Defining dependency "latencystats" 00:02:03.109 Message: lib/lpm: Defining dependency "lpm" 00:02:03.109 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:03.109 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:03.109 Fetching value of define "__AVX512IFMA__" : (undefined) 00:02:03.109 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:02:03.109 Message: lib/member: Defining dependency "member" 00:02:03.109 Message: lib/pcapng: Defining dependency "pcapng" 00:02:03.109 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:03.109 Message: lib/power: Defining dependency "power" 00:02:03.109 Message: lib/rawdev: Defining dependency "rawdev" 00:02:03.109 Message: lib/regexdev: Defining dependency "regexdev" 00:02:03.109 Message: lib/mldev: Defining dependency "mldev" 00:02:03.109 Message: lib/rib: Defining dependency "rib" 00:02:03.109 Message: lib/reorder: Defining dependency "reorder" 00:02:03.109 Message: lib/sched: Defining dependency "sched" 00:02:03.109 Message: lib/security: Defining dependency "security" 00:02:03.109 Message: lib/stack: Defining dependency "stack" 00:02:03.109 Has header 
"linux/userfaultfd.h" : YES 00:02:03.109 Has header "linux/vduse.h" : YES 00:02:03.109 Message: lib/vhost: Defining dependency "vhost" 00:02:03.109 Message: lib/ipsec: Defining dependency "ipsec" 00:02:03.109 Message: lib/pdcp: Defining dependency "pdcp" 00:02:03.109 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:03.109 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:03.109 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:03.109 Message: lib/fib: Defining dependency "fib" 00:02:03.109 Message: lib/port: Defining dependency "port" 00:02:03.109 Message: lib/pdump: Defining dependency "pdump" 00:02:03.109 Message: lib/table: Defining dependency "table" 00:02:03.109 Message: lib/pipeline: Defining dependency "pipeline" 00:02:03.109 Message: lib/graph: Defining dependency "graph" 00:02:03.109 Message: lib/node: Defining dependency "node" 00:02:03.109 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:03.109 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:03.109 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:03.679 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:03.679 Compiler for C supports arguments -Wno-sign-compare: YES 00:02:03.679 Compiler for C supports arguments -Wno-unused-value: YES 00:02:03.679 Compiler for C supports arguments -Wno-format: YES 00:02:03.679 Compiler for C supports arguments -Wno-format-security: YES 00:02:03.679 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:02:03.679 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:02:03.679 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:02:03.679 Compiler for C supports arguments -Wno-unused-parameter: YES 00:02:03.679 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:03.679 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:03.679 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:03.679 Compiler for C supports 
arguments -mavx512bw: YES (cached) 00:02:03.679 Compiler for C supports arguments -march=skylake-avx512: YES 00:02:03.679 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:02:03.679 Has header "sys/epoll.h" : YES 00:02:03.679 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:03.679 Configuring doxy-api-html.conf using configuration 00:02:03.679 Configuring doxy-api-man.conf using configuration 00:02:03.679 Program mandb found: YES (/usr/bin/mandb) 00:02:03.679 Program sphinx-build found: NO 00:02:03.679 Configuring rte_build_config.h using configuration 00:02:03.679 Message: 00:02:03.679 ================= 00:02:03.679 Applications Enabled 00:02:03.679 ================= 00:02:03.679 00:02:03.679 apps: 00:02:03.679 dumpcap, graph, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, 00:02:03.679 test-crypto-perf, test-dma-perf, test-eventdev, test-fib, test-flow-perf, test-gpudev, test-mldev, test-pipeline, 00:02:03.679 test-pmd, test-regex, test-sad, test-security-perf, 00:02:03.679 00:02:03.679 Message: 00:02:03.679 ================= 00:02:03.679 Libraries Enabled 00:02:03.679 ================= 00:02:03.679 00:02:03.679 libs: 00:02:03.679 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:03.679 net, meter, ethdev, pci, cmdline, metrics, hash, timer, 00:02:03.679 acl, bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, 00:02:03.679 dmadev, efd, eventdev, dispatcher, gpudev, gro, gso, ip_frag, 00:02:03.679 jobstats, latencystats, lpm, member, pcapng, power, rawdev, regexdev, 00:02:03.679 mldev, rib, reorder, sched, security, stack, vhost, ipsec, 00:02:03.679 pdcp, fib, port, pdump, table, pipeline, graph, node, 00:02:03.679 00:02:03.679 00:02:03.679 Message: 00:02:03.679 =============== 00:02:03.679 Drivers Enabled 00:02:03.679 =============== 00:02:03.679 00:02:03.679 common: 00:02:03.679 00:02:03.679 bus: 00:02:03.679 pci, vdev, 00:02:03.679 mempool: 00:02:03.679 ring, 00:02:03.679 dma: 
00:02:03.679 00:02:03.679 net: 00:02:03.679 i40e, 00:02:03.679 raw: 00:02:03.679 00:02:03.679 crypto: 00:02:03.679 00:02:03.679 compress: 00:02:03.679 00:02:03.679 regex: 00:02:03.679 00:02:03.679 ml: 00:02:03.679 00:02:03.679 vdpa: 00:02:03.679 00:02:03.679 event: 00:02:03.679 00:02:03.679 baseband: 00:02:03.679 00:02:03.679 gpu: 00:02:03.679 00:02:03.679 00:02:03.679 Message: 00:02:03.679 ================= 00:02:03.679 Content Skipped 00:02:03.679 ================= 00:02:03.679 00:02:03.679 apps: 00:02:03.679 00:02:03.679 libs: 00:02:03.679 00:02:03.679 drivers: 00:02:03.679 common/cpt: not in enabled drivers build config 00:02:03.679 common/dpaax: not in enabled drivers build config 00:02:03.679 common/iavf: not in enabled drivers build config 00:02:03.679 common/idpf: not in enabled drivers build config 00:02:03.679 common/mvep: not in enabled drivers build config 00:02:03.679 common/octeontx: not in enabled drivers build config 00:02:03.679 bus/auxiliary: not in enabled drivers build config 00:02:03.679 bus/cdx: not in enabled drivers build config 00:02:03.679 bus/dpaa: not in enabled drivers build config 00:02:03.679 bus/fslmc: not in enabled drivers build config 00:02:03.679 bus/ifpga: not in enabled drivers build config 00:02:03.679 bus/platform: not in enabled drivers build config 00:02:03.679 bus/vmbus: not in enabled drivers build config 00:02:03.679 common/cnxk: not in enabled drivers build config 00:02:03.679 common/mlx5: not in enabled drivers build config 00:02:03.679 common/nfp: not in enabled drivers build config 00:02:03.679 common/qat: not in enabled drivers build config 00:02:03.679 common/sfc_efx: not in enabled drivers build config 00:02:03.679 mempool/bucket: not in enabled drivers build config 00:02:03.679 mempool/cnxk: not in enabled drivers build config 00:02:03.679 mempool/dpaa: not in enabled drivers build config 00:02:03.679 mempool/dpaa2: not in enabled drivers build config 00:02:03.679 mempool/octeontx: not in enabled drivers build 
config 00:02:03.679 mempool/stack: not in enabled drivers build config 00:02:03.679 dma/cnxk: not in enabled drivers build config 00:02:03.679 dma/dpaa: not in enabled drivers build config 00:02:03.679 dma/dpaa2: not in enabled drivers build config 00:02:03.679 dma/hisilicon: not in enabled drivers build config 00:02:03.679 dma/idxd: not in enabled drivers build config 00:02:03.679 dma/ioat: not in enabled drivers build config 00:02:03.679 dma/skeleton: not in enabled drivers build config 00:02:03.679 net/af_packet: not in enabled drivers build config 00:02:03.679 net/af_xdp: not in enabled drivers build config 00:02:03.679 net/ark: not in enabled drivers build config 00:02:03.679 net/atlantic: not in enabled drivers build config 00:02:03.679 net/avp: not in enabled drivers build config 00:02:03.679 net/axgbe: not in enabled drivers build config 00:02:03.679 net/bnx2x: not in enabled drivers build config 00:02:03.679 net/bnxt: not in enabled drivers build config 00:02:03.679 net/bonding: not in enabled drivers build config 00:02:03.679 net/cnxk: not in enabled drivers build config 00:02:03.679 net/cpfl: not in enabled drivers build config 00:02:03.679 net/cxgbe: not in enabled drivers build config 00:02:03.679 net/dpaa: not in enabled drivers build config 00:02:03.679 net/dpaa2: not in enabled drivers build config 00:02:03.679 net/e1000: not in enabled drivers build config 00:02:03.679 net/ena: not in enabled drivers build config 00:02:03.679 net/enetc: not in enabled drivers build config 00:02:03.679 net/enetfec: not in enabled drivers build config 00:02:03.679 net/enic: not in enabled drivers build config 00:02:03.679 net/failsafe: not in enabled drivers build config 00:02:03.679 net/fm10k: not in enabled drivers build config 00:02:03.679 net/gve: not in enabled drivers build config 00:02:03.679 net/hinic: not in enabled drivers build config 00:02:03.679 net/hns3: not in enabled drivers build config 00:02:03.679 net/iavf: not in enabled drivers build config 
00:02:03.679 net/ice: not in enabled drivers build config 00:02:03.679 net/idpf: not in enabled drivers build config 00:02:03.679 net/igc: not in enabled drivers build config 00:02:03.679 net/ionic: not in enabled drivers build config 00:02:03.679 net/ipn3ke: not in enabled drivers build config 00:02:03.679 net/ixgbe: not in enabled drivers build config 00:02:03.679 net/mana: not in enabled drivers build config 00:02:03.679 net/memif: not in enabled drivers build config 00:02:03.679 net/mlx4: not in enabled drivers build config 00:02:03.679 net/mlx5: not in enabled drivers build config 00:02:03.679 net/mvneta: not in enabled drivers build config 00:02:03.679 net/mvpp2: not in enabled drivers build config 00:02:03.679 net/netvsc: not in enabled drivers build config 00:02:03.679 net/nfb: not in enabled drivers build config 00:02:03.679 net/nfp: not in enabled drivers build config 00:02:03.679 net/ngbe: not in enabled drivers build config 00:02:03.679 net/null: not in enabled drivers build config 00:02:03.679 net/octeontx: not in enabled drivers build config 00:02:03.679 net/octeon_ep: not in enabled drivers build config 00:02:03.679 net/pcap: not in enabled drivers build config 00:02:03.679 net/pfe: not in enabled drivers build config 00:02:03.679 net/qede: not in enabled drivers build config 00:02:03.679 net/ring: not in enabled drivers build config 00:02:03.679 net/sfc: not in enabled drivers build config 00:02:03.679 net/softnic: not in enabled drivers build config 00:02:03.679 net/tap: not in enabled drivers build config 00:02:03.679 net/thunderx: not in enabled drivers build config 00:02:03.679 net/txgbe: not in enabled drivers build config 00:02:03.679 net/vdev_netvsc: not in enabled drivers build config 00:02:03.679 net/vhost: not in enabled drivers build config 00:02:03.679 net/virtio: not in enabled drivers build config 00:02:03.679 net/vmxnet3: not in enabled drivers build config 00:02:03.679 raw/cnxk_bphy: not in enabled drivers build config 00:02:03.680 
raw/cnxk_gpio: not in enabled drivers build config 00:02:03.680 raw/dpaa2_cmdif: not in enabled drivers build config 00:02:03.680 raw/ifpga: not in enabled drivers build config 00:02:03.680 raw/ntb: not in enabled drivers build config 00:02:03.680 raw/skeleton: not in enabled drivers build config 00:02:03.680 crypto/armv8: not in enabled drivers build config 00:02:03.680 crypto/bcmfs: not in enabled drivers build config 00:02:03.680 crypto/caam_jr: not in enabled drivers build config 00:02:03.680 crypto/ccp: not in enabled drivers build config 00:02:03.680 crypto/cnxk: not in enabled drivers build config 00:02:03.680 crypto/dpaa_sec: not in enabled drivers build config 00:02:03.680 crypto/dpaa2_sec: not in enabled drivers build config 00:02:03.680 crypto/ipsec_mb: not in enabled drivers build config 00:02:03.680 crypto/mlx5: not in enabled drivers build config 00:02:03.680 crypto/mvsam: not in enabled drivers build config 00:02:03.680 crypto/nitrox: not in enabled drivers build config 00:02:03.680 crypto/null: not in enabled drivers build config 00:02:03.680 crypto/octeontx: not in enabled drivers build config 00:02:03.680 crypto/openssl: not in enabled drivers build config 00:02:03.680 crypto/scheduler: not in enabled drivers build config 00:02:03.680 crypto/uadk: not in enabled drivers build config 00:02:03.680 crypto/virtio: not in enabled drivers build config 00:02:03.680 compress/isal: not in enabled drivers build config 00:02:03.680 compress/mlx5: not in enabled drivers build config 00:02:03.680 compress/octeontx: not in enabled drivers build config 00:02:03.680 compress/zlib: not in enabled drivers build config 00:02:03.680 regex/mlx5: not in enabled drivers build config 00:02:03.680 regex/cn9k: not in enabled drivers build config 00:02:03.680 ml/cnxk: not in enabled drivers build config 00:02:03.680 vdpa/ifc: not in enabled drivers build config 00:02:03.680 vdpa/mlx5: not in enabled drivers build config 00:02:03.680 vdpa/nfp: not in enabled drivers build 
config 00:02:03.680 vdpa/sfc: not in enabled drivers build config 00:02:03.680 event/cnxk: not in enabled drivers build config 00:02:03.680 event/dlb2: not in enabled drivers build config 00:02:03.680 event/dpaa: not in enabled drivers build config 00:02:03.680 event/dpaa2: not in enabled drivers build config 00:02:03.680 event/dsw: not in enabled drivers build config 00:02:03.680 event/opdl: not in enabled drivers build config 00:02:03.680 event/skeleton: not in enabled drivers build config 00:02:03.680 event/sw: not in enabled drivers build config 00:02:03.680 event/octeontx: not in enabled drivers build config 00:02:03.680 baseband/acc: not in enabled drivers build config 00:02:03.680 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:02:03.680 baseband/fpga_lte_fec: not in enabled drivers build config 00:02:03.680 baseband/la12xx: not in enabled drivers build config 00:02:03.680 baseband/null: not in enabled drivers build config 00:02:03.680 baseband/turbo_sw: not in enabled drivers build config 00:02:03.680 gpu/cuda: not in enabled drivers build config 00:02:03.680 00:02:03.680 00:02:03.680 Build targets in project: 217 00:02:03.680 00:02:03.680 DPDK 23.11.0 00:02:03.680 00:02:03.680 User defined options 00:02:03.680 libdir : lib 00:02:03.680 prefix : /home/vagrant/spdk_repo/dpdk/build 00:02:03.680 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:02:03.680 c_link_args : 00:02:03.680 enable_docs : false 00:02:03.680 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:03.680 enable_kmods : false 00:02:03.680 machine : native 00:02:03.680 tests : false 00:02:03.680 00:02:03.680 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:03.680 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 
00:02:03.939 04:51:14 build_native_dpdk -- common/autobuild_common.sh@192 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 00:02:03.939 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:02:03.939 [1/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:03.939 [2/707] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:03.939 [3/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:03.939 [4/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:03.939 [5/707] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:03.939 [6/707] Linking static target lib/librte_kvargs.a 00:02:04.199 [7/707] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:04.199 [8/707] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:04.199 [9/707] Linking static target lib/librte_log.a 00:02:04.199 [10/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:04.199 [11/707] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.199 [12/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:04.458 [13/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:04.458 [14/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:04.458 [15/707] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:04.458 [16/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:04.458 [17/707] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.458 [18/707] Linking target lib/librte_log.so.24.0 00:02:04.458 [19/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:04.458 [20/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:04.717 [21/707] Compiling C 
object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:04.717 [22/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:04.717 [23/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:04.717 [24/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:04.717 [25/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:04.717 [26/707] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:02:04.717 [27/707] Linking target lib/librte_kvargs.so.24.0 00:02:04.717 [28/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:04.976 [29/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:04.976 [30/707] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:04.976 [31/707] Linking static target lib/librte_telemetry.a 00:02:04.976 [32/707] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:02:04.976 [33/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:04.976 [34/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:04.976 [35/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:04.976 [36/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:04.976 [37/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:05.236 [38/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:05.236 [39/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:05.236 [40/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:05.236 [41/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:05.236 [42/707] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 
00:02:05.236 [43/707] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:05.236 [44/707] Linking target lib/librte_telemetry.so.24.0 00:02:05.236 [45/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:05.494 [46/707] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:02:05.494 [47/707] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:05.494 [48/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:05.494 [49/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:05.494 [50/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:05.494 [51/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:05.494 [52/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:05.494 [53/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:05.494 [54/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:05.753 [55/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:05.753 [56/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:05.753 [57/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:05.753 [58/707] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:05.753 [59/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:05.753 [60/707] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:05.753 [61/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:05.753 [62/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:05.753 [63/707] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:06.013 [64/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:06.013 [65/707] Compiling C object 
lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:06.013 [66/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:06.013 [67/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:06.013 [68/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:06.013 [69/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:06.272 [70/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:06.272 [71/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:06.272 [72/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:06.272 [73/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:06.272 [74/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:06.272 [75/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:06.272 [76/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:06.272 [77/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:06.272 [78/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:06.531 [79/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:06.531 [80/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:06.531 [81/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:06.531 [82/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:06.531 [83/707] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:06.531 [84/707] Linking static target lib/librte_ring.a 00:02:06.531 [85/707] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:06.791 [86/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:06.791 [87/707] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.791 [88/707] Compiling C object 
lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:06.791 [89/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:06.791 [90/707] Linking static target lib/librte_eal.a 00:02:06.791 [91/707] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:06.791 [92/707] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:07.050 [93/707] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:07.050 [94/707] Linking static target lib/librte_mempool.a 00:02:07.050 [95/707] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:07.050 [96/707] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:07.050 [97/707] Linking static target lib/librte_rcu.a 00:02:07.310 [98/707] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:07.310 [99/707] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:07.310 [100/707] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:07.310 [101/707] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:07.310 [102/707] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:07.310 [103/707] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.310 [104/707] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:07.310 [105/707] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.569 [106/707] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:07.569 [107/707] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:07.569 [108/707] Linking static target lib/librte_net.a 00:02:07.569 [109/707] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:07.569 [110/707] Linking static target lib/librte_meter.a 00:02:07.569 [111/707] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:07.828 [112/707] Generating lib/net.sym_chk with a 
custom command (wrapped by meson to capture output) 00:02:07.828 [113/707] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:07.828 [114/707] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:07.828 [115/707] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.828 [116/707] Linking static target lib/librte_mbuf.a 00:02:07.828 [117/707] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:07.828 [118/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:08.088 [119/707] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.347 [120/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:08.347 [121/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:08.347 [122/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:08.347 [123/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:08.607 [124/707] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:08.607 [125/707] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:08.607 [126/707] Linking static target lib/librte_pci.a 00:02:08.607 [127/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:08.607 [128/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:08.607 [129/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:08.607 [130/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:08.607 [131/707] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.865 [132/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:08.865 [133/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:08.865 [134/707] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:08.865 [135/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:08.865 [136/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:08.865 [137/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:08.865 [138/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:08.865 [139/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:08.865 [140/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:08.865 [141/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:09.124 [142/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:09.124 [143/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:09.124 [144/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:09.124 [145/707] Linking static target lib/librte_cmdline.a 00:02:09.383 [146/707] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:02:09.383 [147/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:09.383 [148/707] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:02:09.383 [149/707] Linking static target lib/librte_metrics.a 00:02:09.383 [150/707] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:09.642 [151/707] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:09.642 [152/707] Linking static target lib/librte_timer.a 00:02:09.642 [153/707] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.642 [154/707] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.642 [155/707] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:09.902 [156/707] Compiling C object 
lib/librte_acl.a.p/acl_acl_gen.c.o 00:02:09.902 [157/707] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.902 [158/707] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:02:10.161 [159/707] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:02:10.161 [160/707] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:02:10.421 [161/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:02:10.680 [162/707] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:02:10.680 [163/707] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:02:10.680 [164/707] Linking static target lib/librte_bitratestats.a 00:02:10.680 [165/707] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:02:10.680 [166/707] Linking static target lib/librte_bbdev.a 00:02:10.680 [167/707] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.680 [168/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:02:10.940 [169/707] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:11.199 [170/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:02:11.199 [171/707] Linking static target lib/librte_hash.a 00:02:11.199 [172/707] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.199 [173/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:02:11.199 [174/707] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:02:11.199 [175/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:11.460 [176/707] Linking static target lib/librte_ethdev.a 00:02:11.460 [177/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:02:11.460 [178/707] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:02:11.460 [179/707] Linking static target lib/acl/libavx2_tmp.a 00:02:11.460 [180/707] Generating lib/eal.sym_chk with a custom command 
(wrapped by meson to capture output) 00:02:11.460 [181/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:02:11.460 [182/707] Linking target lib/librte_eal.so.24.0 00:02:11.460 [183/707] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.720 [184/707] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:02:11.720 [185/707] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:02:11.720 [186/707] Linking target lib/librte_ring.so.24.0 00:02:11.720 [187/707] Linking target lib/librte_meter.so.24.0 00:02:11.720 [188/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:02:11.720 [189/707] Linking target lib/librte_pci.so.24.0 00:02:11.720 [190/707] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:02:11.720 [191/707] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:02:11.720 [192/707] Linking target lib/librte_timer.so.24.0 00:02:11.720 [193/707] Linking target lib/librte_rcu.so.24.0 00:02:11.980 [194/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:02:11.980 [195/707] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:11.980 [196/707] Linking static target lib/librte_cfgfile.a 00:02:11.980 [197/707] Linking target lib/librte_mempool.so.24.0 00:02:11.980 [198/707] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:02:11.980 [199/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:02:11.980 [200/707] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:02:11.980 [201/707] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:02:11.980 [202/707] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:11.980 [203/707] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:02:11.980 [204/707] 
Linking target lib/librte_mbuf.so.24.0 00:02:11.980 [205/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:02:11.980 [206/707] Linking static target lib/librte_bpf.a 00:02:12.239 [207/707] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:02:12.239 [208/707] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.239 [209/707] Linking target lib/librte_bbdev.so.24.0 00:02:12.239 [210/707] Linking target lib/librte_net.so.24.0 00:02:12.239 [211/707] Linking target lib/librte_cfgfile.so.24.0 00:02:12.239 [212/707] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:12.239 [213/707] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:02:12.239 [214/707] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.239 [215/707] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:12.239 [216/707] Linking target lib/librte_cmdline.so.24.0 00:02:12.239 [217/707] Linking static target lib/librte_compressdev.a 00:02:12.239 [218/707] Linking target lib/librte_hash.so.24.0 00:02:12.499 [219/707] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx512.c.o 00:02:12.499 [220/707] Linking static target lib/librte_acl.a 00:02:12.499 [221/707] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:12.499 [222/707] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:02:12.499 [223/707] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:02:12.759 [224/707] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:02:12.759 [225/707] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.759 [226/707] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:02:12.759 [227/707] Linking static 
target lib/librte_distributor.a 00:02:12.759 [228/707] Linking target lib/librte_acl.so.24.0 00:02:12.759 [229/707] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:12.759 [230/707] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.759 [231/707] Linking target lib/librte_compressdev.so.24.0 00:02:12.759 [232/707] Generating symbol file lib/librte_acl.so.24.0.p/librte_acl.so.24.0.symbols 00:02:12.759 [233/707] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.019 [234/707] Linking target lib/librte_distributor.so.24.0 00:02:13.019 [235/707] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:02:13.019 [236/707] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:13.019 [237/707] Linking static target lib/librte_dmadev.a 00:02:13.279 [238/707] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:02:13.279 [239/707] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.279 [240/707] Linking target lib/librte_dmadev.so.24.0 00:02:13.539 [241/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:02:13.539 [242/707] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:02:13.539 [243/707] Linking static target lib/librte_efd.a 00:02:13.539 [244/707] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:02:13.800 [245/707] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:13.800 [246/707] Linking static target lib/librte_cryptodev.a 00:02:13.800 [247/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_dma_adapter.c.o 00:02:13.800 [248/707] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.800 [249/707] Linking target lib/librte_efd.so.24.0 00:02:14.068 [250/707] Compiling C object 
lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:02:14.068 [251/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:02:14.068 [252/707] Compiling C object lib/librte_dispatcher.a.p/dispatcher_rte_dispatcher.c.o 00:02:14.068 [253/707] Linking static target lib/librte_dispatcher.a 00:02:14.335 [254/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:02:14.335 [255/707] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:02:14.335 [256/707] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:02:14.335 [257/707] Linking static target lib/librte_gpudev.a 00:02:14.595 [258/707] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:02:14.595 [259/707] Compiling C object lib/librte_gro.a.p/gro_gro_tcp6.c.o 00:02:14.595 [260/707] Generating lib/dispatcher.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.595 [261/707] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.595 [262/707] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:02:14.595 [263/707] Linking target lib/librte_cryptodev.so.24.0 00:02:14.855 [264/707] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:02:14.855 [265/707] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:02:14.855 [266/707] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:02:14.855 [267/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:02:14.855 [268/707] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.114 [269/707] Linking target lib/librte_gpudev.so.24.0 00:02:15.114 [270/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:02:15.114 [271/707] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:02:15.114 [272/707] Linking static target lib/librte_eventdev.a 00:02:15.114 
[273/707] Linking static target lib/librte_gro.a 00:02:15.114 [274/707] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:02:15.115 [275/707] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:02:15.115 [276/707] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:02:15.115 [277/707] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:02:15.115 [278/707] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.374 [279/707] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:02:15.375 [280/707] Linking static target lib/librte_gso.a 00:02:15.375 [281/707] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.375 [282/707] Linking target lib/librte_ethdev.so.24.0 00:02:15.375 [283/707] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.375 [284/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:02:15.375 [285/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:02:15.375 [286/707] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:02:15.375 [287/707] Linking target lib/librte_metrics.so.24.0 00:02:15.639 [288/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:02:15.639 [289/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:02:15.639 [290/707] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:02:15.639 [291/707] Linking target lib/librte_bpf.so.24.0 00:02:15.639 [292/707] Linking target lib/librte_gro.so.24.0 00:02:15.639 [293/707] Linking static target lib/librte_jobstats.a 00:02:15.639 [294/707] Linking target lib/librte_gso.so.24.0 00:02:15.639 [295/707] Generating symbol file lib/librte_metrics.so.24.0.p/librte_metrics.so.24.0.symbols 00:02:15.639 [296/707] Linking target lib/librte_bitratestats.so.24.0 
00:02:15.639 [297/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:02:15.639 [298/707] Generating symbol file lib/librte_bpf.so.24.0.p/librte_bpf.so.24.0.symbols 00:02:15.639 [299/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:02:15.639 [300/707] Linking static target lib/librte_ip_frag.a 00:02:15.900 [301/707] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:02:15.900 [302/707] Linking static target lib/librte_latencystats.a 00:02:15.900 [303/707] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.900 [304/707] Linking target lib/librte_jobstats.so.24.0 00:02:15.900 [305/707] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:02:15.900 [306/707] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.900 [307/707] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.900 [308/707] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:02:15.900 [309/707] Linking target lib/librte_ip_frag.so.24.0 00:02:16.160 [310/707] Linking target lib/librte_latencystats.so.24.0 00:02:16.160 [311/707] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:02:16.160 [312/707] Linking static target lib/member/libsketch_avx512_tmp.a 00:02:16.160 [313/707] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:16.160 [314/707] Generating symbol file lib/librte_ip_frag.so.24.0.p/librte_ip_frag.so.24.0.symbols 00:02:16.160 [315/707] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:16.160 [316/707] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:02:16.160 [317/707] Linking static target lib/librte_lpm.a 00:02:16.160 [318/707] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:16.419 [319/707] Compiling C object 
lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:16.419 [320/707] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.419 [321/707] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:16.420 [322/707] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:02:16.420 [323/707] Linking static target lib/librte_pcapng.a 00:02:16.679 [324/707] Linking target lib/librte_lpm.so.24.0 00:02:16.679 [325/707] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:02:16.679 [326/707] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:16.679 [327/707] Generating symbol file lib/librte_lpm.so.24.0.p/librte_lpm.so.24.0.symbols 00:02:16.679 [328/707] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:02:16.679 [329/707] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:16.679 [330/707] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.679 [331/707] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.679 [332/707] Linking target lib/librte_pcapng.so.24.0 00:02:16.940 [333/707] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:16.940 [334/707] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:16.940 [335/707] Linking target lib/librte_eventdev.so.24.0 00:02:16.940 [336/707] Generating symbol file lib/librte_pcapng.so.24.0.p/librte_pcapng.so.24.0.symbols 00:02:16.940 [337/707] Generating symbol file lib/librte_eventdev.so.24.0.p/librte_eventdev.so.24.0.symbols 00:02:16.940 [338/707] Linking target lib/librte_dispatcher.so.24.0 00:02:16.940 [339/707] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:16.940 [340/707] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:16.940 [341/707] Linking static target lib/librte_power.a 
00:02:17.200 [342/707] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev_pmd.c.o 00:02:17.200 [343/707] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils.c.o 00:02:17.200 [344/707] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:02:17.200 [345/707] Linking static target lib/librte_regexdev.a 00:02:17.200 [346/707] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:02:17.200 [347/707] Linking static target lib/librte_rawdev.a 00:02:17.200 [348/707] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev.c.o 00:02:17.460 [349/707] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar.c.o 00:02:17.460 [350/707] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar_bfloat16.c.o 00:02:17.460 [351/707] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:02:17.460 [352/707] Linking static target lib/librte_mldev.a 00:02:17.460 [353/707] Linking static target lib/librte_member.a 00:02:17.460 [354/707] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:02:17.720 [355/707] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.720 [356/707] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:02:17.720 [357/707] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:17.720 [358/707] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.720 [359/707] Linking static target lib/librte_reorder.a 00:02:17.720 [360/707] Linking target lib/librte_power.so.24.0 00:02:17.720 [361/707] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:02:17.720 [362/707] Linking target lib/librte_rawdev.so.24.0 00:02:17.720 [363/707] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.720 [364/707] Linking target lib/librte_member.so.24.0 00:02:17.720 [365/707] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson 
to capture output) 00:02:17.980 [366/707] Linking target lib/librte_regexdev.so.24.0 00:02:17.980 [367/707] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:02:17.980 [368/707] Linking static target lib/librte_rib.a 00:02:17.980 [369/707] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.980 [370/707] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:02:17.980 [371/707] Linking target lib/librte_reorder.so.24.0 00:02:17.980 [372/707] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:17.980 [373/707] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:02:17.980 [374/707] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:02:17.980 [375/707] Generating symbol file lib/librte_reorder.so.24.0.p/librte_reorder.so.24.0.symbols 00:02:17.980 [376/707] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:02:17.980 [377/707] Linking static target lib/librte_stack.a 00:02:18.239 [378/707] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:18.239 [379/707] Linking static target lib/librte_security.a 00:02:18.239 [380/707] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.240 [381/707] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.240 [382/707] Linking target lib/librte_stack.so.24.0 00:02:18.240 [383/707] Linking target lib/librte_rib.so.24.0 00:02:18.499 [384/707] Generating symbol file lib/librte_rib.so.24.0.p/librte_rib.so.24.0.symbols 00:02:18.499 [385/707] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:18.500 [386/707] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:18.500 [387/707] Generating lib/mldev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.500 [388/707] Linking target lib/librte_mldev.so.24.0 00:02:18.500 [389/707] Generating lib/security.sym_chk with a custom command 
(wrapped by meson to capture output) 00:02:18.759 [390/707] Linking target lib/librte_security.so.24.0 00:02:18.759 [391/707] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:02:18.759 [392/707] Linking static target lib/librte_sched.a 00:02:18.759 [393/707] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:18.759 [394/707] Generating symbol file lib/librte_security.so.24.0.p/librte_security.so.24.0.symbols 00:02:18.759 [395/707] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:19.019 [396/707] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.020 [397/707] Linking target lib/librte_sched.so.24.0 00:02:19.020 [398/707] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:19.020 [399/707] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:19.020 [400/707] Generating symbol file lib/librte_sched.so.24.0.p/librte_sched.so.24.0.symbols 00:02:19.279 [401/707] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:02:19.279 [402/707] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:02:19.279 [403/707] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:02:19.539 [404/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_cnt.c.o 00:02:19.539 [405/707] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:19.539 [406/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_crypto.c.o 00:02:19.799 [407/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_ctrl_pdu.c.o 00:02:19.800 [408/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_reorder.c.o 00:02:19.800 [409/707] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:02:20.060 [410/707] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:02:20.060 [411/707] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:02:20.060 [412/707] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:02:20.060 [413/707] Compiling C object 
lib/librte_pdcp.a.p/pdcp_rte_pdcp.c.o 00:02:20.060 [414/707] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:02:20.060 [415/707] Linking static target lib/librte_ipsec.a 00:02:20.330 [416/707] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.330 [417/707] Linking target lib/librte_ipsec.so.24.0 00:02:20.592 [418/707] Compiling C object lib/librte_fib.a.p/fib_dir24_8_avx512.c.o 00:02:20.592 [419/707] Compiling C object lib/librte_fib.a.p/fib_trie_avx512.c.o 00:02:20.592 [420/707] Generating symbol file lib/librte_ipsec.so.24.0.p/librte_ipsec.so.24.0.symbols 00:02:20.592 [421/707] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:02:20.592 [422/707] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:02:20.592 [423/707] Linking static target lib/librte_fib.a 00:02:20.883 [424/707] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:02:20.883 [425/707] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:02:20.883 [426/707] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.883 [427/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_process.c.o 00:02:20.883 [428/707] Linking target lib/librte_fib.so.24.0 00:02:20.883 [429/707] Linking static target lib/librte_pdcp.a 00:02:20.883 [430/707] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:02:21.143 [431/707] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:02:21.143 [432/707] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:02:21.143 [433/707] Generating lib/pdcp.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.143 [434/707] Linking target lib/librte_pdcp.so.24.0 00:02:21.403 [435/707] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:02:21.403 [436/707] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:02:21.663 [437/707] Compiling C object 
lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:02:21.663 [438/707] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:02:21.663 [439/707] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:02:21.663 [440/707] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:02:21.923 [441/707] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:02:21.923 [442/707] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:02:21.923 [443/707] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:02:21.923 [444/707] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:02:21.923 [445/707] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:02:22.183 [446/707] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:02:22.183 [447/707] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:02:22.183 [448/707] Linking static target lib/librte_port.a 00:02:22.183 [449/707] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:02:22.183 [450/707] Linking static target lib/librte_pdump.a 00:02:22.442 [451/707] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:02:22.443 [452/707] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:02:22.443 [453/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:02:22.443 [454/707] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.443 [455/707] Linking target lib/librte_pdump.so.24.0 00:02:22.702 [456/707] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.702 [457/707] Linking target lib/librte_port.so.24.0 00:02:22.702 [458/707] Generating symbol file lib/librte_port.so.24.0.p/librte_port.so.24.0.symbols 00:02:22.702 [459/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:02:22.702 [460/707] 
Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:22.702 [461/707] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:02:22.702 [462/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:02:22.962 [463/707] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:02:22.962 [464/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:02:22.962 [465/707] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:02:23.223 [466/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:02:23.223 [467/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:02:23.223 [468/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:02:23.223 [469/707] Linking static target lib/librte_table.a 00:02:23.482 [470/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:02:23.482 [471/707] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:02:23.742 [472/707] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:02:23.742 [473/707] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.742 [474/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ipsec.c.o 00:02:24.002 [475/707] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:02:24.002 [476/707] Linking target lib/librte_table.so.24.0 00:02:24.002 [477/707] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:02:24.002 [478/707] Generating symbol file lib/librte_table.so.24.0.p/librte_table.so.24.0.symbols 00:02:24.262 [479/707] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:02:24.262 [480/707] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:02:24.262 [481/707] Compiling C object lib/librte_graph.a.p/graph_graph_pcap.c.o 00:02:24.262 [482/707] Compiling C object lib/librte_graph.a.p/graph_rte_graph_worker.c.o 
00:02:24.262 [483/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:02:24.521 [484/707] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:02:24.521 [485/707] Compiling C object lib/librte_graph.a.p/graph_rte_graph_model_mcore_dispatch.c.o 00:02:24.521 [486/707] Linking static target lib/librte_graph.a 00:02:24.780 [487/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:02:24.780 [488/707] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:02:24.780 [489/707] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:02:24.780 [490/707] Compiling C object lib/librte_node.a.p/node_ip4_local.c.o 00:02:25.040 [491/707] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.040 [492/707] Linking target lib/librte_graph.so.24.0 00:02:25.040 [493/707] Compiling C object lib/librte_node.a.p/node_ip4_reassembly.c.o 00:02:25.299 [494/707] Generating symbol file lib/librte_graph.so.24.0.p/librte_graph.so.24.0.symbols 00:02:25.299 [495/707] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:02:25.299 [496/707] Compiling C object lib/librte_node.a.p/node_null.c.o 00:02:25.299 [497/707] Compiling C object lib/librte_node.a.p/node_ip6_lookup.c.o 00:02:25.559 [498/707] Compiling C object lib/librte_node.a.p/node_log.c.o 00:02:25.559 [499/707] Compiling C object lib/librte_node.a.p/node_kernel_tx.c.o 00:02:25.559 [500/707] Compiling C object lib/librte_node.a.p/node_kernel_rx.c.o 00:02:25.559 [501/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:25.559 [502/707] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:02:25.559 [503/707] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:02:25.819 [504/707] Compiling C object lib/librte_node.a.p/node_ip6_rewrite.c.o 00:02:25.819 [505/707] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:02:25.819 [506/707] Compiling C object 
drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:26.079 [507/707] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:26.079 [508/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:26.079 [509/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:26.079 [510/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:26.079 [511/707] Compiling C object lib/librte_node.a.p/node_udp4_input.c.o 00:02:26.079 [512/707] Linking static target lib/librte_node.a 00:02:26.339 [513/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:26.339 [514/707] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:26.339 [515/707] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:26.339 [516/707] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:26.339 [517/707] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.339 [518/707] Linking target lib/librte_node.so.24.0 00:02:26.599 [519/707] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:26.599 [520/707] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:26.599 [521/707] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:26.599 [522/707] Linking static target drivers/librte_bus_vdev.a 00:02:26.599 [523/707] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:26.599 [524/707] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:26.599 [525/707] Linking static target drivers/librte_bus_pci.a 00:02:26.599 [526/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:02:26.599 [527/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:02:26.599 [528/707] Compiling C object 
drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:26.599 [529/707] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.859 [530/707] Linking target drivers/librte_bus_vdev.so.24.0 00:02:26.859 [531/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:02:26.859 [532/707] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:26.859 [533/707] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:26.859 [534/707] Generating symbol file drivers/librte_bus_vdev.so.24.0.p/librte_bus_vdev.so.24.0.symbols 00:02:26.859 [535/707] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.859 [536/707] Linking target drivers/librte_bus_pci.so.24.0 00:02:27.119 [537/707] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:27.119 [538/707] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:27.119 [539/707] Linking static target drivers/librte_mempool_ring.a 00:02:27.119 [540/707] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:27.119 [541/707] Generating symbol file drivers/librte_bus_pci.so.24.0.p/librte_bus_pci.so.24.0.symbols 00:02:27.119 [542/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:02:27.119 [543/707] Linking target drivers/librte_mempool_ring.so.24.0 00:02:27.379 [544/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:02:27.647 [545/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:02:27.647 [546/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:02:27.647 [547/707] Linking static target drivers/net/i40e/base/libi40e_base.a 00:02:28.252 [548/707] Compiling C object 
drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:02:28.252 [549/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:02:28.512 [550/707] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:02:28.512 [551/707] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:02:28.512 [552/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:02:28.512 [553/707] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:02:28.512 [554/707] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:02:28.771 [555/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:02:28.771 [556/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:02:28.771 [557/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:02:29.031 [558/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:02:29.031 [559/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_recycle_mbufs_vec_common.c.o 00:02:29.031 [560/707] Compiling C object app/dpdk-graph.p/graph_cli.c.o 00:02:29.291 [561/707] Compiling C object app/dpdk-graph.p/graph_conn.c.o 00:02:29.291 [562/707] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:02:29.550 [563/707] Compiling C object app/dpdk-graph.p/graph_ethdev_rx.c.o 00:02:29.550 [564/707] Compiling C object app/dpdk-graph.p/graph_ethdev.c.o 00:02:29.810 [565/707] Compiling C object app/dpdk-graph.p/graph_ip4_route.c.o 00:02:29.810 [566/707] Compiling C object app/dpdk-graph.p/graph_graph.c.o 00:02:29.810 [567/707] Compiling C object app/dpdk-graph.p/graph_ip6_route.c.o 00:02:30.070 [568/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:02:30.070 [569/707] Compiling C object app/dpdk-graph.p/graph_l3fwd.c.o 00:02:30.070 [570/707] Compiling C object 
drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:02:30.070 [571/707] Compiling C object app/dpdk-graph.p/graph_main.c.o 00:02:30.070 [572/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:02:30.330 [573/707] Compiling C object app/dpdk-graph.p/graph_mempool.c.o 00:02:30.330 [574/707] Compiling C object app/dpdk-graph.p/graph_neigh.c.o 00:02:30.330 [575/707] Compiling C object app/dpdk-graph.p/graph_utils.c.o 00:02:30.590 [576/707] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:02:30.590 [577/707] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:02:30.590 [578/707] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:02:30.590 [579/707] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:02:30.850 [580/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:02:30.850 [581/707] Linking static target drivers/libtmp_rte_net_i40e.a 00:02:30.850 [582/707] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:02:31.110 [583/707] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:02:31.110 [584/707] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:31.110 [585/707] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:02:31.110 [586/707] Compiling C object drivers/librte_net_i40e.so.24.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:31.110 [587/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:02:31.110 [588/707] Linking static target drivers/librte_net_i40e.a 00:02:31.110 [589/707] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:02:31.369 [590/707] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:02:31.629 [591/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:02:31.629 [592/707] Compiling C object 
app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:02:31.629 [593/707] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.629 [594/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:02:31.629 [595/707] Linking target drivers/librte_net_i40e.so.24.0 00:02:31.889 [596/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:02:31.889 [597/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:02:31.889 [598/707] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:31.889 [599/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:02:31.889 [600/707] Linking static target lib/librte_vhost.a 00:02:31.889 [601/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:02:32.149 [602/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:02:32.149 [603/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:02:32.409 [604/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:02:32.409 [605/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:02:32.409 [606/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:02:32.409 [607/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:02:32.409 [608/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:02:32.669 [609/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:02:32.669 [610/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:02:32.669 [611/707] Compiling C object 
app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:02:32.669 [612/707] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_main.c.o 00:02:32.669 [613/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:02:32.929 [614/707] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.929 [615/707] Linking target lib/librte_vhost.so.24.0 00:02:32.929 [616/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:02:32.929 [617/707] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_benchmark.c.o 00:02:33.189 [618/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:02:33.189 [619/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:02:33.189 [620/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:02:33.759 [621/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:02:33.759 [622/707] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:02:33.759 [623/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:02:34.018 [624/707] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:02:34.018 [625/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:02:34.018 [626/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:02:34.018 [627/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:02:34.018 [628/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_test.c.o 00:02:34.277 [629/707] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:02:34.277 [630/707] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:02:34.277 [631/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_main.c.o 00:02:34.277 [632/707] 
Compiling C object app/dpdk-test-mldev.p/test-mldev_parser.c.o 00:02:34.277 [633/707] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:02:34.277 [634/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_options.c.o 00:02:34.537 [635/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_common.c.o 00:02:34.537 [636/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_device_ops.c.o 00:02:34.537 [637/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_common.c.o 00:02:34.797 [638/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_ops.c.o 00:02:34.797 [639/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_ordered.c.o 00:02:34.797 [640/707] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:02:34.797 [641/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_interleave.c.o 00:02:34.797 [642/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_stats.c.o 00:02:35.057 [643/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:02:35.057 [644/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:02:35.057 [645/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:02:35.057 [646/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:02:35.317 [647/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:02:35.317 [648/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:02:35.317 [649/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:02:35.317 [650/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:02:35.577 [651/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:02:35.577 [652/707] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:02:35.577 [653/707] Compiling C object 
app/dpdk-testpmd.p/test-pmd_cmdline_cman.c.o 00:02:35.837 [654/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_common.c.o 00:02:35.837 [655/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:02:36.097 [656/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:02:36.097 [657/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:02:36.097 [658/707] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:02:36.097 [659/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:02:36.097 [660/707] Linking static target lib/librte_pipeline.a 00:02:36.097 [661/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:02:36.357 [662/707] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:02:36.617 [663/707] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:02:36.617 [664/707] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:02:36.617 [665/707] Linking target app/dpdk-dumpcap 00:02:36.617 [666/707] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:02:36.617 [667/707] Linking target app/dpdk-graph 00:02:36.877 [668/707] Linking target app/dpdk-pdump 00:02:36.877 [669/707] Linking target app/dpdk-proc-info 00:02:36.877 [670/707] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:02:36.877 [671/707] Linking target app/dpdk-test-acl 00:02:37.136 [672/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:02:37.137 [673/707] Linking target app/dpdk-test-bbdev 00:02:37.137 [674/707] Linking target app/dpdk-test-cmdline 00:02:37.396 [675/707] Linking target app/dpdk-test-compress-perf 00:02:37.396 [676/707] Linking target app/dpdk-test-crypto-perf 00:02:37.396 [677/707] Linking target app/dpdk-test-fib 00:02:37.396 [678/707] Linking target app/dpdk-test-dma-perf 00:02:37.396 [679/707] Linking target app/dpdk-test-eventdev 00:02:37.396 [680/707] Linking target 
app/dpdk-test-flow-perf 00:02:37.656 [681/707] Linking target app/dpdk-test-gpudev 00:02:37.657 [682/707] Linking target app/dpdk-test-pipeline 00:02:37.657 [683/707] Linking target app/dpdk-test-mldev 00:02:37.916 [684/707] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:02:37.916 [685/707] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:02:37.916 [686/707] Compiling C object app/dpdk-testpmd.p/test-pmd_recycle_mbufs.c.o 00:02:38.177 [687/707] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:02:38.177 [688/707] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:02:38.177 [689/707] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:02:38.177 [690/707] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:02:38.437 [691/707] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.437 [692/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:02:38.437 [693/707] Linking target lib/librte_pipeline.so.24.0 00:02:38.437 [694/707] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:02:38.697 [695/707] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:02:38.697 [696/707] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:02:38.697 [697/707] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:02:38.957 [698/707] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:02:38.957 [699/707] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:02:38.957 [700/707] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:02:39.217 [701/707] Linking target app/dpdk-test-sad 00:02:39.217 [702/707] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:02:39.217 [703/707] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:02:39.217 [704/707] Linking target app/dpdk-test-regex 00:02:39.217 [705/707] Compiling C object 
app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:02:39.787 [706/707] Linking target app/dpdk-testpmd 00:02:39.787 [707/707] Linking target app/dpdk-test-security-perf 00:02:39.787 04:51:50 build_native_dpdk -- common/autobuild_common.sh@194 -- $ uname -s 00:02:39.787 04:51:50 build_native_dpdk -- common/autobuild_common.sh@194 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:02:39.787 04:51:50 build_native_dpdk -- common/autobuild_common.sh@207 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install 00:02:39.787 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:02:39.787 [0/1] Installing files. 00:02:40.048 Installing subdir /home/vagrant/spdk_repo/dpdk/examples to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples 00:02:40.048 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:02:40.048 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:02:40.048 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:02:40.048 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:02:40.048 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:02:40.048 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/README to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:02:40.048 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/dummy.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:02:40.048 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t1.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:02:40.048 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t2.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:02:40.048 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t3.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:02:40.048 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:40.048 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:40.048 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:40.048 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:40.048 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:40.048 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:40.048 Installing /home/vagrant/spdk_repo/dpdk/examples/common/pkt_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common 00:02:40.048 Installing /home/vagrant/spdk_repo/dpdk/examples/common/altivec/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/altivec 00:02:40.048 Installing /home/vagrant/spdk_repo/dpdk/examples/common/neon/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/neon 00:02:40.048 Installing /home/vagrant/spdk_repo/dpdk/examples/common/sse/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/sse 00:02:40.048 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:02:40.048 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:02:40.048 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:02:40.048 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/dmafwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:02:40.048 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool 00:02:40.048 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:40.048 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:40.048 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:40.048 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:40.048 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:40.048 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:40.048 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:40.048 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:40.048 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:40.048 Installing 
/home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:40.048 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:40.048 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:40.048 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:40.048 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:40.048 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:40.048 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:40.048 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:40.048 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_aes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:40.048 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ccm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:40.048 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_cmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:40.049 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ecdsa.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:40.049 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_gcm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:40.049 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_hmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:40.049 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_rsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:40.049 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_sha.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:40.049 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_tdes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:40.049 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_xts.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:40.049 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:40.049 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:02:40.049 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/flow_blocks.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:02:40.049 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:02:40.049 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:02:40.049 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:02:40.049 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:40.049 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:40.049 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:40.049 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:40.049 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:40.049 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:40.049 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:40.049 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:40.049 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:40.049 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:40.049 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:40.049 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:40.049 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:40.049 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:40.049 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:40.049 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:40.049 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:40.049 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:40.049 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:40.049 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:40.049 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:40.049 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:40.049 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:40.049 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:40.049 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:40.049 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:40.049 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:40.049 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:40.049 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:40.049 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/firewall.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:40.049 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:40.049 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:40.049 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:40.049 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:40.049 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:40.049 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:40.049 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/tap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:40.049 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:40.049 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:40.049 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:40.049 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep0.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:40.049 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep1.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:40.049 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:40.049 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:40.049 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:40.049 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:40.049 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:40.049 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:40.049 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipip.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:40.049 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:40.049 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.h 
to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:40.049 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:40.049 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:40.049 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:40.049 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:40.049 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_process.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:40.049 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:40.049 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:40.049 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:40.049 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:40.049 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/rt.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:40.049 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:40.049 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:40.049 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:40.049 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp4.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:40.049 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp6.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:40.049 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:40.049 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:40.049 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:40.049 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:40.049 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/linux_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:40.049 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/load_env.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:40.049 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:40.049 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:40.049 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/run_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:40.050 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:40.050 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:40.050 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:40.050 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:40.050 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:40.050 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:40.050 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:40.050 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:40.050 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:40.050 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:40.050 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:40.050 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:40.050 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:40.050 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:40.050 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:40.050 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:40.050 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:40.050 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:40.050 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:40.050 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:40.050 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:40.050 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:40.050 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 
00:02:40.050 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:40.050 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:40.050 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:40.050 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:40.050 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:40.050 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:40.050 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:40.050 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:40.050 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:40.050 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:40.050 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:40.050 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:40.050 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:40.050 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:40.050 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:40.050 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:40.050 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:40.050 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:40.050 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:40.050 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:40.050 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:40.050 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:02:40.050 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:02:40.050 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:02:40.050 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:02:40.050 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:40.050 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:40.050 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:40.050 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:40.050 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:40.050 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:40.050 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:40.050 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:40.050 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:40.050 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:40.050 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:40.050 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:40.050 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:40.050 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:40.050 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:40.050 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:40.050 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:40.050 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:40.050 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:40.050 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:40.050 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:40.050 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:40.050 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:40.050 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:40.050 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:40.050 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:40.050 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:40.050 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_fib.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:40.050 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:40.050 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:40.050 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:40.050 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:40.050 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:40.050 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:40.050 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_route.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:40.050 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:40.050 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:40.050 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:40.050 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:40.051 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:40.051 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt
00:02:40.051 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt
00:02:40.312 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process
00:02:40.312 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp
00:02:40.312 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client
00:02:40.312 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client
00:02:40.312 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:02:40.312 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:02:40.312 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:02:40.312 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:02:40.312 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:02:40.312 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:02:40.312 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared
00:02:40.312 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp
00:02:40.312 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp
00:02:40.312 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp
00:02:40.312 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp
00:02:40.312 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:02:40.312 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:02:40.312 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:02:40.312 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:02:40.312 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:02:40.312 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp
00:02:40.312 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp
00:02:40.312 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb
00:02:40.312 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb
00:02:40.312 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/ntb_fwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb
00:02:40.312 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering
00:02:40.312 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering
00:02:40.312 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:02:40.312 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:02:40.312 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:02:40.312 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:02:40.312 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:02:40.312 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:02:40.312 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:02:40.312 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:02:40.312 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:02:40.312 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:02:40.312 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ethdev.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:40.312 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:40.312 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:40.312 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:40.312 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:40.312 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_routing_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:40.312 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:40.312 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:40.312 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:40.312 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:40.312 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:40.312 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec_sa.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:40.312 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:40.312 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:40.312 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:40.312 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:40.312 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:40.312 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:40.312 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:40.312 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:40.312 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:40.313 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:40.313 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:40.313 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:40.313 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/packet.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:40.313 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/pcap.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:40.313 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:40.313 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:40.313 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:40.313 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:40.313 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:40.313 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:40.313 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:40.313 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:40.313 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:40.313 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:40.313 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:40.313 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:40.313 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:40.313 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:40.313 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:40.313 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:40.313 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient
00:02:40.313 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/ptpclient.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient
00:02:40.313 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter
00:02:40.313 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter
00:02:40.313 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter
00:02:40.313 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter
00:02:40.313 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter
00:02:40.313 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:02:40.313 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/app_thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:02:40.313 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:02:40.313 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:02:40.313 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:02:40.313 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cmdline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:02:40.313 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:02:40.313 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:02:40.313 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:02:40.313 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:02:40.313 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_ov.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:02:40.313 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_pie.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:02:40.313 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_red.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:02:40.313 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/stats.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:02:40.313 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks
00:02:40.313 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks
00:02:40.313 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd
00:02:40.313 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node
00:02:40.313 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/node.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node
00:02:40.313 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server
00:02:40.313 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server
00:02:40.313 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server
00:02:40.313 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server
00:02:40.313 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server
00:02:40.313 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server
00:02:40.313 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/shared
00:02:40.313 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores
00:02:40.313 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores
00:02:40.313 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton
00:02:40.313 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/basicfwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton
00:02:40.313 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer
00:02:40.313 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer
00:02:40.313 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa
00:02:40.313 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa
00:02:40.313 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa
00:02:40.313 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/vdpa_blk_compact.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa
00:02:40.313 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost
00:02:40.313 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost
00:02:40.313 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost
00:02:40.313 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/virtio_net.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost
00:02:40.313 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk
00:02:40.313 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk
00:02:40.313 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk_spec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk
00:02:40.313 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk
00:02:40.313 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk
00:02:40.313 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk_compat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk
00:02:40.313 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto
00:02:40.313 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto
00:02:40.313 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:40.313 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:40.313 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:40.313 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:40.313 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:40.313 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:40.313 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:40.313 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:40.313 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:40.313 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:40.314 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:40.314 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:40.314 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:40.314 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:40.314 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:40.314 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:02:40.314 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:02:40.314 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:02:40.314 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:02:40.314 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:02:40.314 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:02:40.314 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq
00:02:40.314 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq
00:02:40.314 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb
00:02:40.314 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb
00:02:40.314 Installing lib/librte_log.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.314 Installing lib/librte_log.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.314 Installing lib/librte_kvargs.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.314 Installing lib/librte_kvargs.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.314 Installing lib/librte_telemetry.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.314 Installing lib/librte_telemetry.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.314 Installing lib/librte_eal.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.314 Installing lib/librte_eal.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.314 Installing lib/librte_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.314 Installing lib/librte_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.314 Installing lib/librte_rcu.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.314 Installing lib/librte_rcu.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.314 Installing lib/librte_mempool.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.314 Installing lib/librte_mempool.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.314 Installing lib/librte_mbuf.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.314 Installing lib/librte_mbuf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.314 Installing lib/librte_net.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.314 Installing lib/librte_net.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.314 Installing lib/librte_meter.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.314 Installing lib/librte_meter.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.314 Installing lib/librte_ethdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.314 Installing lib/librte_ethdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.314 Installing lib/librte_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.314 Installing lib/librte_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.314 Installing lib/librte_cmdline.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.314 Installing lib/librte_cmdline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.314 Installing lib/librte_metrics.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.314 Installing lib/librte_metrics.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.314 Installing lib/librte_hash.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.314 Installing lib/librte_hash.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.314 Installing lib/librte_timer.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.314 Installing lib/librte_timer.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.314 Installing lib/librte_acl.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.314 Installing lib/librte_acl.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.314 Installing lib/librte_bbdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.314 Installing lib/librte_bbdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.314 Installing lib/librte_bitratestats.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.314 Installing lib/librte_bitratestats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.314 Installing lib/librte_bpf.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.314 Installing lib/librte_bpf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.314 Installing lib/librte_cfgfile.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.314 Installing lib/librte_cfgfile.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.314 Installing lib/librte_compressdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.314 Installing lib/librte_compressdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.314 Installing lib/librte_cryptodev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.314 Installing lib/librte_cryptodev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.314 Installing lib/librte_distributor.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.314 Installing lib/librte_distributor.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.314 Installing lib/librte_dmadev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.314 Installing lib/librte_dmadev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.314 Installing lib/librte_efd.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.314 Installing lib/librte_efd.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.314 Installing lib/librte_eventdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.314 Installing lib/librte_eventdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.314 Installing lib/librte_dispatcher.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.314 Installing lib/librte_dispatcher.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.314 Installing lib/librte_gpudev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.314 Installing lib/librte_gpudev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.314 Installing lib/librte_gro.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.314 Installing lib/librte_gro.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.314 Installing lib/librte_gso.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.314 Installing lib/librte_gso.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.314 Installing lib/librte_ip_frag.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.314 Installing lib/librte_ip_frag.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.314 Installing lib/librte_jobstats.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.314 Installing lib/librte_jobstats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.314 Installing lib/librte_latencystats.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.314 Installing lib/librte_latencystats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.314 Installing lib/librte_lpm.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.314 Installing lib/librte_lpm.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.314 Installing lib/librte_member.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.314 Installing lib/librte_member.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.314 Installing lib/librte_pcapng.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.314 Installing lib/librte_pcapng.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.314 Installing lib/librte_power.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.314 Installing lib/librte_power.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.314 Installing lib/librte_rawdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.314 Installing lib/librte_rawdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.314 Installing lib/librte_regexdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.314 Installing lib/librte_regexdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.314 Installing lib/librte_mldev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.314 Installing lib/librte_mldev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.314 Installing lib/librte_rib.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.314 Installing lib/librte_rib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.314 Installing lib/librte_reorder.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.314 Installing lib/librte_reorder.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.314 Installing lib/librte_sched.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.314 Installing lib/librte_sched.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.314 Installing lib/librte_security.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.314 Installing lib/librte_security.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.314 Installing lib/librte_stack.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.314 Installing lib/librte_stack.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.314 Installing lib/librte_vhost.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.314 Installing lib/librte_vhost.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.314 Installing lib/librte_ipsec.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.314 Installing lib/librte_ipsec.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.314 Installing lib/librte_pdcp.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.314 Installing lib/librte_pdcp.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.314 Installing lib/librte_fib.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.314 Installing lib/librte_fib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.314 Installing lib/librte_port.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.314 Installing lib/librte_port.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.314 Installing lib/librte_pdump.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.314 Installing lib/librte_pdump.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.314 Installing lib/librte_table.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.314 Installing lib/librte_table.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.314 Installing lib/librte_pipeline.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.314 Installing lib/librte_pipeline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.314 Installing lib/librte_graph.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.314 Installing lib/librte_graph.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.575 Installing lib/librte_node.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.575 Installing lib/librte_node.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.575 Installing drivers/librte_bus_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.575 Installing drivers/librte_bus_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0
00:02:40.575 Installing drivers/librte_bus_vdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.575 Installing drivers/librte_bus_vdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0
00:02:40.575 Installing drivers/librte_mempool_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.575 Installing drivers/librte_mempool_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0
00:02:40.575 Installing drivers/librte_net_i40e.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.575 Installing drivers/librte_net_i40e.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0
00:02:40.575 Installing app/dpdk-dumpcap to /home/vagrant/spdk_repo/dpdk/build/bin
00:02:40.575 Installing app/dpdk-graph to
/home/vagrant/spdk_repo/dpdk/build/bin 00:02:40.575 Installing app/dpdk-pdump to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:40.575 Installing app/dpdk-proc-info to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:40.575 Installing app/dpdk-test-acl to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:40.575 Installing app/dpdk-test-bbdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:40.575 Installing app/dpdk-test-cmdline to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:40.575 Installing app/dpdk-test-compress-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:40.575 Installing app/dpdk-test-crypto-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:40.575 Installing app/dpdk-test-dma-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:40.575 Installing app/dpdk-test-eventdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:40.575 Installing app/dpdk-test-fib to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:40.575 Installing app/dpdk-test-flow-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:40.575 Installing app/dpdk-test-gpudev to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:40.575 Installing app/dpdk-test-mldev to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:40.575 Installing app/dpdk-test-pipeline to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:40.575 Installing app/dpdk-testpmd to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:40.575 Installing app/dpdk-test-regex to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:40.575 Installing app/dpdk-test-sad to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:40.575 Installing app/dpdk-test-security-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:40.575 Installing /home/vagrant/spdk_repo/dpdk/config/rte_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.575 Installing /home/vagrant/spdk_repo/dpdk/lib/log/rte_log.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.575 Installing /home/vagrant/spdk_repo/dpdk/lib/kvargs/rte_kvargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.575 Installing 
/home/vagrant/spdk_repo/dpdk/lib/telemetry/rte_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.575 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:40.575 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:40.575 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:40.575 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:40.575 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:40.575 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:40.575 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:40.575 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:40.575 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:40.575 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:40.575 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:40.575 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:40.575 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.575 
Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.575 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.575 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.575 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.575 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.575 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.575 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.575 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.575 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rtm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.575 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.575 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.575 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.575 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.575 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.575 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /home/vagrant/spdk_repo/dpdk/build/include 
00:02:40.575 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.575 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_alarm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.575 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitmap.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.575 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.575 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_branch_prediction.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.575 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bus.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.575 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_class.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.575 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.575 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_compat.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.575 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_debug.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.575 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_dev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.575 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_devargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.575 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.576 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_memconfig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.576 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.576 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_errno.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.576 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_epoll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.576 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_fbarray.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.576 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hexdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.576 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hypervisor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.576 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_interrupts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.576 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_keepalive.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.576 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_launch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.576 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.576 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lock_annotations.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.576 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_malloc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.576 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_mcslock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.576 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memory.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.576 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memzone.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.576 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.576 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_features.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.576 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_per_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.576 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pflock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.576 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_random.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.576 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_reciprocal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.576 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqcount.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.576 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.576 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.576 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service_component.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.576 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_stdatomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.576 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_string_fns.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.576 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_tailq.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.576 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_thread.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.576 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_ticketlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.576 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_time.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.576 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.576 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.576 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point_register.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.576 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_uuid.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.576 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_version.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.576 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_vfio.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.576 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/linux/include/rte_os.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.576 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.576 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.576 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.576 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.576 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_c11_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.576 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_generic_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.576 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.576 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.576 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:02:40.576 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.576 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_zc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.576 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.576 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.576 Installing /home/vagrant/spdk_repo/dpdk/lib/rcu/rte_rcu_qsbr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.576 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.576 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.576 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.576 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.576 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_ptype.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.576 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.576 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_dyn.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.576 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.576 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.576 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_udp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.576 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tls.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:02:40.576 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_dtls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.576 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_esp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.576 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_sctp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.576 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_icmp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.576 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_arp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.576 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ether.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.576 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_macsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.576 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_vxlan.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.576 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gre.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.576 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gtp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.576 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.838 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.838 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_mpls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.838 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_higig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.838 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ecpri.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.838 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_pdcp_hdr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.838 Installing 
/home/vagrant/spdk_repo/dpdk/lib/net/rte_geneve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.838 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_l2tpv2.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.838 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ppp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.838 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.838 Installing /home/vagrant/spdk_repo/dpdk/lib/meter/rte_meter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.838 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_cman.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.838 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.838 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.838 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_dev_info.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.838 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.838 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.838 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.838 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.838 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.838 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.838 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.838 Installing 
/home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_eth_ctrl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.838 Installing /home/vagrant/spdk_repo/dpdk/lib/pci/rte_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.838 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.838 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.838 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_num.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.838 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.838 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.838 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_string.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.838 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_rdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.838 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_vt100.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.838 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_socket.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.838 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_cirbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.838 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_portlist.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.838 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.838 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.838 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_fbk_hash.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:02:40.838 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.838 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.838 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_jhash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.838 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.838 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.838 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.838 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.838 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_sw.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.838 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.838 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_x86_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.838 Installing /home/vagrant/spdk_repo/dpdk/lib/timer/rte_timer.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.838 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.838 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl_osdep.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.838 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.838 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.838 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_op.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:02:40.839 Installing /home/vagrant/spdk_repo/dpdk/lib/bitratestats/rte_bitrate.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.839 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/bpf_def.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.839 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.839 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.839 Installing /home/vagrant/spdk_repo/dpdk/lib/cfgfile/rte_cfgfile.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.839 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_compressdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.839 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_comp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.839 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.839 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.839 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.839 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_sym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.839 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_asym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.839 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.839 Installing /home/vagrant/spdk_repo/dpdk/lib/distributor/rte_distributor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.839 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.839 Installing 
/home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.839 Installing /home/vagrant/spdk_repo/dpdk/lib/efd/rte_efd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.839 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.839 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_dma_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.839 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.839 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.839 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.839 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_timer_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.839 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.839 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.839 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.839 Installing /home/vagrant/spdk_repo/dpdk/lib/dispatcher/rte_dispatcher.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.839 Installing /home/vagrant/spdk_repo/dpdk/lib/gpudev/rte_gpudev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.839 Installing /home/vagrant/spdk_repo/dpdk/lib/gro/rte_gro.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.839 Installing /home/vagrant/spdk_repo/dpdk/lib/gso/rte_gso.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.839 Installing 
/home/vagrant/spdk_repo/dpdk/lib/ip_frag/rte_ip_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.839 Installing /home/vagrant/spdk_repo/dpdk/lib/jobstats/rte_jobstats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.839 Installing /home/vagrant/spdk_repo/dpdk/lib/latencystats/rte_latencystats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.839 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.839 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.839 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.839 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.839 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_scalar.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.839 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.839 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.839 Installing /home/vagrant/spdk_repo/dpdk/lib/member/rte_member.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.839 Installing /home/vagrant/spdk_repo/dpdk/lib/pcapng/rte_pcapng.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.839 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.839 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_guest_channel.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.839 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_pmd_mgmt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.839 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_uncore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.839 Installing 
/home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.839 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.839 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.839 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.839 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.839 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.839 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.839 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.839 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.839 Installing /home/vagrant/spdk_repo/dpdk/lib/reorder/rte_reorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.839 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_approx.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.839 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_red.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.839 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.839 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.839 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_pie.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.839 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.839 Installing 
/home/vagrant/spdk_repo/dpdk/lib/security/rte_security_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.839 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.839 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_std.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.839 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.839 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.839 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_c11.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.839 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_stubs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.839 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vdpa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.839 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.839 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_async.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.839 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.839 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.839 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.839 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sad.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.839 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.839 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.839 
Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.839 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.839 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.839 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.839 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.839 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.839 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ras.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.839 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.839 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.839 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.839 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.839 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sym_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.839 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.839 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.839 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.839 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.839 
Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.839 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.839 Installing /home/vagrant/spdk_repo/dpdk/lib/pdump/rte_pdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.839 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.839 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.839 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.839 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_em.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.839 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_learner.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.839 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_selector.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.839 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_wm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.839 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.839 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.839 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_array.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.839 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.840 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_cuckoo.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.840 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:02:40.840 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.840 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm_ipv6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.840 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_stub.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.840 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.840 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.840 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.840 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.840 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_port_in_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.840 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_table_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.840 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.840 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.840 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_extern.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.840 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ctl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.840 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.840 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.840 Installing 
/home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_mcore_dispatch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.840 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_rtc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.840 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.840 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_eth_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.840 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip4_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.840 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip6_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.840 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_udp4_input_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.840 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/pci/rte_bus_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.840 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.840 Installing /home/vagrant/spdk_repo/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.840 Installing /home/vagrant/spdk_repo/dpdk/buildtools/dpdk-cmdline-gen.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:40.840 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-devbind.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:40.840 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-pmdinfo.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:40.840 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:40.840 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-hugepages.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:40.840 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-rss-flows.py to 
/home/vagrant/spdk_repo/dpdk/build/bin 00:02:40.840 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/rte_build_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:40.840 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:02:40.840 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:02:40.840 Installing symlink pointing to librte_log.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so.24 00:02:40.840 Installing symlink pointing to librte_log.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so 00:02:40.840 Installing symlink pointing to librte_kvargs.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so.24 00:02:40.840 Installing symlink pointing to librte_kvargs.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so 00:02:40.840 Installing symlink pointing to librte_telemetry.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so.24 00:02:40.840 Installing symlink pointing to librte_telemetry.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so 00:02:40.840 Installing symlink pointing to librte_eal.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so.24 00:02:40.840 Installing symlink pointing to librte_eal.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so 00:02:40.840 Installing symlink pointing to librte_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so.24 00:02:40.840 Installing symlink pointing to librte_ring.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so 00:02:40.840 Installing symlink pointing to librte_rcu.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so.24 00:02:40.840 Installing symlink pointing to librte_rcu.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so 00:02:40.840 Installing symlink pointing to librte_mempool.so.24.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so.24 00:02:40.840 Installing symlink pointing to librte_mempool.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so 00:02:40.840 Installing symlink pointing to librte_mbuf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so.24 00:02:40.840 Installing symlink pointing to librte_mbuf.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so 00:02:40.840 Installing symlink pointing to librte_net.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so.24 00:02:40.840 Installing symlink pointing to librte_net.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so 00:02:40.840 Installing symlink pointing to librte_meter.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so.24 00:02:40.840 Installing symlink pointing to librte_meter.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so 00:02:40.840 Installing symlink pointing to librte_ethdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so.24 00:02:40.840 Installing symlink pointing to librte_ethdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so 00:02:40.840 Installing symlink pointing to librte_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so.24 00:02:40.840 Installing symlink pointing to librte_pci.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so 00:02:40.840 Installing symlink pointing to librte_cmdline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so.24 00:02:40.840 Installing symlink pointing to librte_cmdline.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so 00:02:40.840 Installing symlink pointing to librte_metrics.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so.24 00:02:40.840 Installing symlink pointing to librte_metrics.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so 00:02:40.840 Installing symlink pointing to librte_hash.so.24.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so.24 00:02:40.840 Installing symlink pointing to librte_hash.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so 00:02:40.840 Installing symlink pointing to librte_timer.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so.24 00:02:40.840 Installing symlink pointing to librte_timer.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so 00:02:40.840 Installing symlink pointing to librte_acl.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so.24 00:02:40.840 Installing symlink pointing to librte_acl.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so 00:02:40.840 Installing symlink pointing to librte_bbdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so.24 00:02:40.840 Installing symlink pointing to librte_bbdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so 00:02:40.840 Installing symlink pointing to librte_bitratestats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so.24 00:02:40.840 Installing symlink pointing to librte_bitratestats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so 00:02:40.840 Installing symlink pointing to librte_bpf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so.24 00:02:40.840 Installing symlink pointing to librte_bpf.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so 00:02:40.840 Installing symlink pointing to librte_cfgfile.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so.24 00:02:40.840 Installing symlink pointing to librte_cfgfile.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so 00:02:40.840 Installing symlink pointing to librte_compressdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so.24 00:02:40.840 Installing symlink pointing to librte_compressdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so 00:02:40.840 Installing symlink pointing to 
librte_cryptodev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so.24 00:02:40.840 Installing symlink pointing to librte_cryptodev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so 00:02:40.840 Installing symlink pointing to librte_distributor.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so.24 00:02:40.840 Installing symlink pointing to librte_distributor.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so 00:02:40.840 Installing symlink pointing to librte_dmadev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so.24 00:02:40.840 Installing symlink pointing to librte_dmadev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so 00:02:40.840 Installing symlink pointing to librte_efd.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so.24 00:02:40.840 Installing symlink pointing to librte_efd.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so 00:02:40.840 Installing symlink pointing to librte_eventdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so.24 00:02:40.840 Installing symlink pointing to librte_eventdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so 00:02:40.840 Installing symlink pointing to librte_dispatcher.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so.24 00:02:40.840 Installing symlink pointing to librte_dispatcher.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so 00:02:40.840 Installing symlink pointing to librte_gpudev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so.24 00:02:40.840 Installing symlink pointing to librte_gpudev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so 00:02:40.840 Installing symlink pointing to librte_gro.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so.24 00:02:40.840 Installing symlink pointing to librte_gro.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so 
00:02:40.840 Installing symlink pointing to librte_gso.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so.24 00:02:40.840 Installing symlink pointing to librte_gso.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so 00:02:40.840 Installing symlink pointing to librte_ip_frag.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so.24 00:02:40.840 Installing symlink pointing to librte_ip_frag.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so 00:02:40.840 Installing symlink pointing to librte_jobstats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so.24 00:02:40.840 Installing symlink pointing to librte_jobstats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so 00:02:40.840 Installing symlink pointing to librte_latencystats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so.24 00:02:40.840 Installing symlink pointing to librte_latencystats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so 00:02:40.840 Installing symlink pointing to librte_lpm.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so.24 00:02:40.840 Installing symlink pointing to librte_lpm.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so 00:02:40.841 Installing symlink pointing to librte_member.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so.24 00:02:40.841 Installing symlink pointing to librte_member.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so 00:02:40.841 Installing symlink pointing to librte_pcapng.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so.24 00:02:40.841 Installing symlink pointing to librte_pcapng.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so 00:02:40.841 Installing symlink pointing to librte_power.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so.24 00:02:40.841 Installing symlink pointing to librte_power.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so 00:02:40.841 Installing symlink pointing to librte_rawdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so.24 00:02:40.841 Installing symlink pointing to librte_rawdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so 00:02:40.841 Installing symlink pointing to librte_regexdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so.24 00:02:40.841 Installing symlink pointing to librte_regexdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so 00:02:40.841 Installing symlink pointing to librte_mldev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so.24 00:02:40.841 Installing symlink pointing to librte_mldev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so 00:02:40.841 Installing symlink pointing to librte_rib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so.24 00:02:40.841 Installing symlink pointing to librte_rib.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so 00:02:40.841 Installing symlink pointing to librte_reorder.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so.24 00:02:40.841 Installing symlink pointing to librte_reorder.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so 00:02:40.841 Installing symlink pointing to librte_sched.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so.24 00:02:40.841 Installing symlink pointing to librte_sched.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so 00:02:40.841 Installing symlink pointing to librte_security.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so.24 00:02:40.841 Installing symlink pointing to librte_security.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so 00:02:40.841 './librte_bus_pci.so' -> 'dpdk/pmds-24.0/librte_bus_pci.so' 00:02:40.841 './librte_bus_pci.so.24' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24' 00:02:40.841 
'./librte_bus_pci.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24.0' 00:02:40.841 './librte_bus_vdev.so' -> 'dpdk/pmds-24.0/librte_bus_vdev.so' 00:02:40.841 './librte_bus_vdev.so.24' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24' 00:02:40.841 './librte_bus_vdev.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24.0' 00:02:40.841 './librte_mempool_ring.so' -> 'dpdk/pmds-24.0/librte_mempool_ring.so' 00:02:40.841 './librte_mempool_ring.so.24' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24' 00:02:40.841 './librte_mempool_ring.so.24.0' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24.0' 00:02:40.841 './librte_net_i40e.so' -> 'dpdk/pmds-24.0/librte_net_i40e.so' 00:02:40.841 './librte_net_i40e.so.24' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24' 00:02:40.841 './librte_net_i40e.so.24.0' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24.0' 00:02:40.841 Installing symlink pointing to librte_stack.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so.24 00:02:40.841 Installing symlink pointing to librte_stack.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so 00:02:40.841 Installing symlink pointing to librte_vhost.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so.24 00:02:40.841 Installing symlink pointing to librte_vhost.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so 00:02:40.841 Installing symlink pointing to librte_ipsec.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so.24 00:02:40.841 Installing symlink pointing to librte_ipsec.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so 00:02:40.841 Installing symlink pointing to librte_pdcp.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so.24 00:02:40.841 Installing symlink pointing to librte_pdcp.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so 00:02:40.841 Installing symlink pointing to librte_fib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so.24 00:02:40.841 Installing symlink pointing to librte_fib.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so 00:02:40.841 Installing symlink pointing to librte_port.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so.24 00:02:40.841 Installing symlink pointing to librte_port.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so 00:02:40.841 Installing symlink pointing to librte_pdump.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so.24 00:02:40.841 Installing symlink pointing to librte_pdump.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so 00:02:40.841 Installing symlink pointing to librte_table.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so.24 00:02:40.841 Installing symlink pointing to librte_table.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so 00:02:40.841 Installing symlink pointing to librte_pipeline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so.24 00:02:40.841 Installing symlink pointing to librte_pipeline.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so 00:02:40.841 Installing symlink pointing to librte_graph.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so.24 00:02:40.841 Installing symlink pointing to librte_graph.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so 00:02:40.841 Installing symlink pointing to librte_node.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so.24 00:02:40.841 Installing symlink pointing to librte_node.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so 00:02:40.841 Installing symlink pointing to librte_bus_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24 00:02:40.841 Installing symlink pointing to librte_bus_pci.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:02:40.841 Installing symlink pointing to librte_bus_vdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24 00:02:40.841 Installing 
symlink pointing to librte_bus_vdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:02:40.841 Installing symlink pointing to librte_mempool_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24 00:02:40.841 Installing symlink pointing to librte_mempool_ring.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:02:40.841 Installing symlink pointing to librte_net_i40e.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24 00:02:40.841 Installing symlink pointing to librte_net_i40e.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:02:40.841 Running custom install script '/bin/sh /home/vagrant/spdk_repo/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-24.0' 00:02:40.841 04:51:51 build_native_dpdk -- common/autobuild_common.sh@213 -- $ cat 00:02:40.841 04:51:51 build_native_dpdk -- common/autobuild_common.sh@218 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:40.841 00:02:40.841 real 0m44.408s 00:02:40.841 user 4m54.409s 00:02:40.841 sys 0m54.962s 00:02:40.841 04:51:51 build_native_dpdk -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:02:40.841 04:51:51 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:02:40.841 ************************************ 00:02:40.841 END TEST build_native_dpdk 00:02:40.841 ************************************ 00:02:40.841 04:51:51 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:40.841 04:51:51 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:40.841 04:51:51 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:40.841 04:51:51 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:40.841 04:51:51 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:40.841 04:51:51 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:40.841 04:51:51 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:40.841 04:51:51 -- spdk/autobuild.sh@67 -- $ 
/home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-shared 00:02:41.100 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs... 00:02:41.359 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib 00:02:41.359 DPDK includes: //home/vagrant/spdk_repo/dpdk/build/include 00:02:41.359 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:02:41.618 Using 'verbs' RDMA provider 00:02:57.473 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:03:15.591 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:03:15.591 Creating mk/config.mk...done. 00:03:15.591 Creating mk/cc.flags.mk...done. 00:03:15.591 Type 'make' to build. 00:03:15.591 04:52:24 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:03:15.591 04:52:24 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:03:15.591 04:52:24 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:03:15.591 04:52:24 -- common/autotest_common.sh@10 -- $ set +x 00:03:15.591 ************************************ 00:03:15.591 START TEST make 00:03:15.591 ************************************ 00:03:15.591 04:52:24 make -- common/autotest_common.sh@1125 -- $ make -j10 00:03:15.591 make[1]: Nothing to be done for 'all'. 
00:04:02.297 CC lib/ut_mock/mock.o 00:04:02.297 CC lib/log/log_flags.o 00:04:02.297 CC lib/log/log.o 00:04:02.297 CC lib/log/log_deprecated.o 00:04:02.297 CC lib/ut/ut.o 00:04:02.297 LIB libspdk_log.a 00:04:02.297 LIB libspdk_ut_mock.a 00:04:02.297 LIB libspdk_ut.a 00:04:02.297 SO libspdk_ut_mock.so.6.0 00:04:02.297 SO libspdk_log.so.7.0 00:04:02.297 SO libspdk_ut.so.2.0 00:04:02.297 SYMLINK libspdk_ut_mock.so 00:04:02.297 SYMLINK libspdk_ut.so 00:04:02.297 SYMLINK libspdk_log.so 00:04:02.297 CC lib/ioat/ioat.o 00:04:02.297 CXX lib/trace_parser/trace.o 00:04:02.297 CC lib/dma/dma.o 00:04:02.297 CC lib/util/base64.o 00:04:02.297 CC lib/util/bit_array.o 00:04:02.297 CC lib/util/cpuset.o 00:04:02.297 CC lib/util/crc16.o 00:04:02.297 CC lib/util/crc32c.o 00:04:02.297 CC lib/util/crc32.o 00:04:02.297 CC lib/vfio_user/host/vfio_user_pci.o 00:04:02.297 CC lib/util/crc32_ieee.o 00:04:02.297 CC lib/util/crc64.o 00:04:02.297 CC lib/util/dif.o 00:04:02.297 CC lib/util/fd.o 00:04:02.297 LIB libspdk_dma.a 00:04:02.297 CC lib/util/fd_group.o 00:04:02.297 CC lib/util/file.o 00:04:02.297 SO libspdk_dma.so.5.0 00:04:02.297 LIB libspdk_ioat.a 00:04:02.297 CC lib/vfio_user/host/vfio_user.o 00:04:02.297 CC lib/util/hexlify.o 00:04:02.297 SO libspdk_ioat.so.7.0 00:04:02.297 SYMLINK libspdk_dma.so 00:04:02.297 CC lib/util/iov.o 00:04:02.297 CC lib/util/math.o 00:04:02.297 SYMLINK libspdk_ioat.so 00:04:02.297 CC lib/util/net.o 00:04:02.297 CC lib/util/pipe.o 00:04:02.297 CC lib/util/strerror_tls.o 00:04:02.297 CC lib/util/string.o 00:04:02.297 CC lib/util/uuid.o 00:04:02.297 LIB libspdk_vfio_user.a 00:04:02.297 CC lib/util/xor.o 00:04:02.297 CC lib/util/zipf.o 00:04:02.297 SO libspdk_vfio_user.so.5.0 00:04:02.297 CC lib/util/md5.o 00:04:02.297 SYMLINK libspdk_vfio_user.so 00:04:02.297 LIB libspdk_util.a 00:04:02.297 SO libspdk_util.so.10.0 00:04:02.297 LIB libspdk_trace_parser.a 00:04:02.297 SO libspdk_trace_parser.so.6.0 00:04:02.297 SYMLINK libspdk_util.so 00:04:02.297 SYMLINK 
libspdk_trace_parser.so 00:04:02.297 CC lib/rdma_provider/common.o 00:04:02.297 CC lib/rdma_provider/rdma_provider_verbs.o 00:04:02.297 CC lib/vmd/vmd.o 00:04:02.297 CC lib/vmd/led.o 00:04:02.297 CC lib/rdma_utils/rdma_utils.o 00:04:02.297 CC lib/env_dpdk/env.o 00:04:02.297 CC lib/conf/conf.o 00:04:02.297 CC lib/env_dpdk/memory.o 00:04:02.297 CC lib/idxd/idxd.o 00:04:02.297 CC lib/json/json_parse.o 00:04:02.297 CC lib/json/json_util.o 00:04:02.297 CC lib/json/json_write.o 00:04:02.297 LIB libspdk_rdma_provider.a 00:04:02.297 LIB libspdk_conf.a 00:04:02.297 SO libspdk_rdma_provider.so.6.0 00:04:02.297 SO libspdk_conf.so.6.0 00:04:02.297 SYMLINK libspdk_rdma_provider.so 00:04:02.297 CC lib/idxd/idxd_user.o 00:04:02.297 SYMLINK libspdk_conf.so 00:04:02.297 LIB libspdk_rdma_utils.a 00:04:02.297 CC lib/idxd/idxd_kernel.o 00:04:02.297 CC lib/env_dpdk/pci.o 00:04:02.297 SO libspdk_rdma_utils.so.1.0 00:04:02.297 CC lib/env_dpdk/init.o 00:04:02.297 SYMLINK libspdk_rdma_utils.so 00:04:02.297 CC lib/env_dpdk/threads.o 00:04:02.297 LIB libspdk_json.a 00:04:02.297 CC lib/env_dpdk/pci_ioat.o 00:04:02.297 SO libspdk_json.so.6.0 00:04:02.297 CC lib/env_dpdk/pci_virtio.o 00:04:02.297 CC lib/env_dpdk/pci_vmd.o 00:04:02.297 SYMLINK libspdk_json.so 00:04:02.297 CC lib/env_dpdk/pci_idxd.o 00:04:02.297 CC lib/env_dpdk/pci_event.o 00:04:02.297 CC lib/env_dpdk/sigbus_handler.o 00:04:02.297 CC lib/env_dpdk/pci_dpdk.o 00:04:02.297 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:02.297 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:02.297 LIB libspdk_idxd.a 00:04:02.297 LIB libspdk_vmd.a 00:04:02.297 SO libspdk_idxd.so.12.1 00:04:02.297 SO libspdk_vmd.so.6.0 00:04:02.297 SYMLINK libspdk_idxd.so 00:04:02.297 SYMLINK libspdk_vmd.so 00:04:02.297 CC lib/jsonrpc/jsonrpc_server.o 00:04:02.297 CC lib/jsonrpc/jsonrpc_client.o 00:04:02.297 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:04:02.297 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:04:02.297 LIB libspdk_jsonrpc.a 00:04:02.297 SO libspdk_jsonrpc.so.6.0 00:04:02.297 SYMLINK 
libspdk_jsonrpc.so 00:04:02.297 LIB libspdk_env_dpdk.a 00:04:02.297 SO libspdk_env_dpdk.so.15.0 00:04:02.297 CC lib/rpc/rpc.o 00:04:02.297 SYMLINK libspdk_env_dpdk.so 00:04:02.297 LIB libspdk_rpc.a 00:04:02.297 SO libspdk_rpc.so.6.0 00:04:02.297 SYMLINK libspdk_rpc.so 00:04:02.297 CC lib/keyring/keyring_rpc.o 00:04:02.297 CC lib/keyring/keyring.o 00:04:02.297 CC lib/trace/trace.o 00:04:02.297 CC lib/trace/trace_flags.o 00:04:02.297 CC lib/notify/notify_rpc.o 00:04:02.297 CC lib/trace/trace_rpc.o 00:04:02.297 CC lib/notify/notify.o 00:04:02.297 LIB libspdk_notify.a 00:04:02.297 LIB libspdk_keyring.a 00:04:02.297 LIB libspdk_trace.a 00:04:02.297 SO libspdk_notify.so.6.0 00:04:02.297 SO libspdk_keyring.so.2.0 00:04:02.297 SO libspdk_trace.so.11.0 00:04:02.297 SYMLINK libspdk_notify.so 00:04:02.297 SYMLINK libspdk_keyring.so 00:04:02.297 SYMLINK libspdk_trace.so 00:04:02.297 CC lib/thread/thread.o 00:04:02.297 CC lib/thread/iobuf.o 00:04:02.297 CC lib/sock/sock.o 00:04:02.297 CC lib/sock/sock_rpc.o 00:04:02.297 LIB libspdk_sock.a 00:04:02.297 SO libspdk_sock.so.10.0 00:04:02.297 SYMLINK libspdk_sock.so 00:04:02.297 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:02.297 CC lib/nvme/nvme_ctrlr.o 00:04:02.297 CC lib/nvme/nvme_fabric.o 00:04:02.297 CC lib/nvme/nvme_ns_cmd.o 00:04:02.297 CC lib/nvme/nvme_ns.o 00:04:02.297 CC lib/nvme/nvme_pcie_common.o 00:04:02.297 CC lib/nvme/nvme_pcie.o 00:04:02.297 CC lib/nvme/nvme.o 00:04:02.297 CC lib/nvme/nvme_qpair.o 00:04:02.297 LIB libspdk_thread.a 00:04:02.297 SO libspdk_thread.so.10.1 00:04:02.298 CC lib/nvme/nvme_quirks.o 00:04:02.298 CC lib/nvme/nvme_transport.o 00:04:02.298 SYMLINK libspdk_thread.so 00:04:02.298 CC lib/nvme/nvme_discovery.o 00:04:02.298 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:02.298 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:02.298 CC lib/nvme/nvme_tcp.o 00:04:02.298 CC lib/nvme/nvme_opal.o 00:04:02.298 CC lib/nvme/nvme_io_msg.o 00:04:02.298 CC lib/nvme/nvme_poll_group.o 00:04:02.298 CC lib/nvme/nvme_zns.o 00:04:02.557 CC 
lib/accel/accel.o 00:04:02.557 CC lib/blob/blobstore.o 00:04:02.557 CC lib/init/json_config.o 00:04:02.557 CC lib/accel/accel_rpc.o 00:04:02.557 CC lib/virtio/virtio.o 00:04:02.557 CC lib/fsdev/fsdev.o 00:04:02.817 CC lib/accel/accel_sw.o 00:04:02.817 CC lib/blob/request.o 00:04:02.817 CC lib/init/subsystem.o 00:04:02.817 CC lib/blob/zeroes.o 00:04:03.076 CC lib/init/subsystem_rpc.o 00:04:03.076 CC lib/virtio/virtio_vhost_user.o 00:04:03.076 CC lib/init/rpc.o 00:04:03.076 CC lib/blob/blob_bs_dev.o 00:04:03.076 CC lib/nvme/nvme_stubs.o 00:04:03.076 CC lib/nvme/nvme_auth.o 00:04:03.076 LIB libspdk_init.a 00:04:03.335 SO libspdk_init.so.6.0 00:04:03.335 SYMLINK libspdk_init.so 00:04:03.335 CC lib/fsdev/fsdev_io.o 00:04:03.335 CC lib/nvme/nvme_cuse.o 00:04:03.335 CC lib/nvme/nvme_rdma.o 00:04:03.335 CC lib/virtio/virtio_vfio_user.o 00:04:03.594 CC lib/event/app.o 00:04:03.594 CC lib/event/reactor.o 00:04:03.594 CC lib/virtio/virtio_pci.o 00:04:03.594 CC lib/event/log_rpc.o 00:04:03.594 LIB libspdk_accel.a 00:04:03.594 SO libspdk_accel.so.16.0 00:04:03.594 CC lib/fsdev/fsdev_rpc.o 00:04:03.594 SYMLINK libspdk_accel.so 00:04:03.594 CC lib/event/app_rpc.o 00:04:03.853 CC lib/event/scheduler_static.o 00:04:03.853 LIB libspdk_fsdev.a 00:04:03.853 SO libspdk_fsdev.so.1.0 00:04:03.853 LIB libspdk_virtio.a 00:04:03.853 SYMLINK libspdk_fsdev.so 00:04:03.853 SO libspdk_virtio.so.7.0 00:04:03.853 SYMLINK libspdk_virtio.so 00:04:03.853 CC lib/bdev/bdev.o 00:04:03.853 CC lib/bdev/bdev_rpc.o 00:04:03.853 CC lib/bdev/bdev_zone.o 00:04:04.111 CC lib/bdev/part.o 00:04:04.112 LIB libspdk_event.a 00:04:04.112 SO libspdk_event.so.14.0 00:04:04.112 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:04:04.112 CC lib/bdev/scsi_nvme.o 00:04:04.112 SYMLINK libspdk_event.so 00:04:04.679 LIB libspdk_nvme.a 00:04:04.679 LIB libspdk_fuse_dispatcher.a 00:04:04.679 SO libspdk_fuse_dispatcher.so.1.0 00:04:04.939 SYMLINK libspdk_fuse_dispatcher.so 00:04:04.939 SO libspdk_nvme.so.14.0 00:04:05.199 SYMLINK 
libspdk_nvme.so 00:04:05.768 LIB libspdk_blob.a 00:04:06.027 SO libspdk_blob.so.11.0 00:04:06.027 SYMLINK libspdk_blob.so 00:04:06.287 CC lib/blobfs/blobfs.o 00:04:06.287 CC lib/blobfs/tree.o 00:04:06.287 CC lib/lvol/lvol.o 00:04:06.546 LIB libspdk_bdev.a 00:04:06.806 SO libspdk_bdev.so.16.0 00:04:06.806 SYMLINK libspdk_bdev.so 00:04:07.065 CC lib/nvmf/ctrlr.o 00:04:07.065 CC lib/nvmf/ctrlr_bdev.o 00:04:07.065 CC lib/nbd/nbd_rpc.o 00:04:07.065 CC lib/nbd/nbd.o 00:04:07.065 CC lib/nvmf/ctrlr_discovery.o 00:04:07.065 CC lib/ftl/ftl_core.o 00:04:07.065 CC lib/scsi/dev.o 00:04:07.065 CC lib/ublk/ublk.o 00:04:07.324 LIB libspdk_blobfs.a 00:04:07.324 CC lib/ublk/ublk_rpc.o 00:04:07.324 SO libspdk_blobfs.so.10.0 00:04:07.324 CC lib/scsi/lun.o 00:04:07.324 SYMLINK libspdk_blobfs.so 00:04:07.324 CC lib/scsi/port.o 00:04:07.324 LIB libspdk_lvol.a 00:04:07.324 SO libspdk_lvol.so.10.0 00:04:07.324 CC lib/ftl/ftl_init.o 00:04:07.324 SYMLINK libspdk_lvol.so 00:04:07.324 CC lib/ftl/ftl_layout.o 00:04:07.324 CC lib/ftl/ftl_debug.o 00:04:07.583 CC lib/nvmf/subsystem.o 00:04:07.583 LIB libspdk_nbd.a 00:04:07.583 SO libspdk_nbd.so.7.0 00:04:07.583 CC lib/nvmf/nvmf.o 00:04:07.583 SYMLINK libspdk_nbd.so 00:04:07.583 CC lib/ftl/ftl_io.o 00:04:07.583 CC lib/scsi/scsi.o 00:04:07.583 CC lib/ftl/ftl_sb.o 00:04:07.583 CC lib/ftl/ftl_l2p.o 00:04:07.842 CC lib/scsi/scsi_bdev.o 00:04:07.842 LIB libspdk_ublk.a 00:04:07.842 CC lib/ftl/ftl_l2p_flat.o 00:04:07.842 SO libspdk_ublk.so.3.0 00:04:07.842 CC lib/nvmf/nvmf_rpc.o 00:04:07.842 CC lib/nvmf/transport.o 00:04:07.842 SYMLINK libspdk_ublk.so 00:04:07.842 CC lib/nvmf/tcp.o 00:04:07.842 CC lib/nvmf/stubs.o 00:04:07.842 CC lib/scsi/scsi_pr.o 00:04:07.842 CC lib/ftl/ftl_nv_cache.o 00:04:08.101 CC lib/ftl/ftl_band.o 00:04:08.101 CC lib/scsi/scsi_rpc.o 00:04:08.360 CC lib/nvmf/mdns_server.o 00:04:08.360 CC lib/scsi/task.o 00:04:08.360 CC lib/nvmf/rdma.o 00:04:08.619 CC lib/nvmf/auth.o 00:04:08.619 LIB libspdk_scsi.a 00:04:08.619 CC 
lib/ftl/ftl_band_ops.o 00:04:08.619 SO libspdk_scsi.so.9.0 00:04:08.619 CC lib/ftl/ftl_writer.o 00:04:08.619 SYMLINK libspdk_scsi.so 00:04:08.619 CC lib/ftl/ftl_rq.o 00:04:08.619 CC lib/ftl/ftl_reloc.o 00:04:08.619 CC lib/ftl/ftl_l2p_cache.o 00:04:08.878 CC lib/ftl/ftl_p2l.o 00:04:08.878 CC lib/ftl/ftl_p2l_log.o 00:04:08.878 CC lib/ftl/mngt/ftl_mngt.o 00:04:08.878 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:08.878 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:09.137 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:09.137 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:09.137 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:09.137 CC lib/iscsi/conn.o 00:04:09.137 CC lib/iscsi/init_grp.o 00:04:09.396 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:09.396 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:09.396 CC lib/vhost/vhost.o 00:04:09.396 CC lib/vhost/vhost_rpc.o 00:04:09.396 CC lib/iscsi/iscsi.o 00:04:09.396 CC lib/iscsi/param.o 00:04:09.396 CC lib/iscsi/portal_grp.o 00:04:09.396 CC lib/vhost/vhost_scsi.o 00:04:09.396 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:09.396 CC lib/iscsi/tgt_node.o 00:04:09.655 CC lib/iscsi/iscsi_subsystem.o 00:04:09.655 CC lib/vhost/vhost_blk.o 00:04:09.915 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:09.915 CC lib/iscsi/iscsi_rpc.o 00:04:09.915 CC lib/vhost/rte_vhost_user.o 00:04:09.915 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:09.915 CC lib/iscsi/task.o 00:04:09.915 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:10.174 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:10.174 CC lib/ftl/utils/ftl_conf.o 00:04:10.174 CC lib/ftl/utils/ftl_md.o 00:04:10.174 CC lib/ftl/utils/ftl_mempool.o 00:04:10.174 CC lib/ftl/utils/ftl_bitmap.o 00:04:10.174 CC lib/ftl/utils/ftl_property.o 00:04:10.433 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:10.433 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:10.433 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:10.433 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:10.433 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:10.433 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:10.692 CC 
lib/ftl/upgrade/ftl_trim_upgrade.o 00:04:10.692 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:10.692 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:10.692 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:10.692 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:10.692 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:04:10.692 LIB libspdk_nvmf.a 00:04:10.692 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:04:10.692 CC lib/ftl/base/ftl_base_dev.o 00:04:10.692 CC lib/ftl/base/ftl_base_bdev.o 00:04:10.692 CC lib/ftl/ftl_trace.o 00:04:10.950 SO libspdk_nvmf.so.19.0 00:04:10.950 LIB libspdk_iscsi.a 00:04:10.950 LIB libspdk_vhost.a 00:04:10.950 SO libspdk_iscsi.so.8.0 00:04:10.950 SO libspdk_vhost.so.8.0 00:04:10.950 LIB libspdk_ftl.a 00:04:10.950 SYMLINK libspdk_vhost.so 00:04:10.950 SYMLINK libspdk_nvmf.so 00:04:11.210 SYMLINK libspdk_iscsi.so 00:04:11.210 SO libspdk_ftl.so.9.0 00:04:11.470 SYMLINK libspdk_ftl.so 00:04:12.038 CC module/env_dpdk/env_dpdk_rpc.o 00:04:12.038 CC module/sock/posix/posix.o 00:04:12.038 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:12.038 CC module/keyring/linux/keyring.o 00:04:12.038 CC module/keyring/file/keyring.o 00:04:12.038 CC module/accel/error/accel_error.o 00:04:12.038 CC module/accel/ioat/accel_ioat.o 00:04:12.038 CC module/blob/bdev/blob_bdev.o 00:04:12.038 CC module/accel/dsa/accel_dsa.o 00:04:12.038 CC module/fsdev/aio/fsdev_aio.o 00:04:12.038 LIB libspdk_env_dpdk_rpc.a 00:04:12.038 SO libspdk_env_dpdk_rpc.so.6.0 00:04:12.038 CC module/keyring/linux/keyring_rpc.o 00:04:12.038 SYMLINK libspdk_env_dpdk_rpc.so 00:04:12.038 CC module/accel/error/accel_error_rpc.o 00:04:12.038 CC module/keyring/file/keyring_rpc.o 00:04:12.298 CC module/fsdev/aio/fsdev_aio_rpc.o 00:04:12.298 CC module/accel/ioat/accel_ioat_rpc.o 00:04:12.298 LIB libspdk_scheduler_dynamic.a 00:04:12.298 SO libspdk_scheduler_dynamic.so.4.0 00:04:12.298 LIB libspdk_keyring_linux.a 00:04:12.298 LIB libspdk_accel_error.a 00:04:12.298 LIB libspdk_keyring_file.a 00:04:12.298 SYMLINK libspdk_scheduler_dynamic.so 00:04:12.298 LIB 
libspdk_blob_bdev.a 00:04:12.298 SO libspdk_keyring_linux.so.1.0 00:04:12.298 CC module/accel/dsa/accel_dsa_rpc.o 00:04:12.298 SO libspdk_accel_error.so.2.0 00:04:12.298 SO libspdk_blob_bdev.so.11.0 00:04:12.298 SO libspdk_keyring_file.so.2.0 00:04:12.298 LIB libspdk_accel_ioat.a 00:04:12.298 CC module/fsdev/aio/linux_aio_mgr.o 00:04:12.298 SYMLINK libspdk_keyring_linux.so 00:04:12.298 SYMLINK libspdk_accel_error.so 00:04:12.298 SYMLINK libspdk_blob_bdev.so 00:04:12.298 SYMLINK libspdk_keyring_file.so 00:04:12.298 SO libspdk_accel_ioat.so.6.0 00:04:12.298 LIB libspdk_accel_dsa.a 00:04:12.557 SYMLINK libspdk_accel_ioat.so 00:04:12.557 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:12.557 SO libspdk_accel_dsa.so.5.0 00:04:12.557 SYMLINK libspdk_accel_dsa.so 00:04:12.557 CC module/scheduler/gscheduler/gscheduler.o 00:04:12.557 CC module/accel/iaa/accel_iaa.o 00:04:12.557 LIB libspdk_scheduler_dpdk_governor.a 00:04:12.557 CC module/bdev/delay/vbdev_delay.o 00:04:12.557 CC module/bdev/error/vbdev_error.o 00:04:12.557 CC module/blobfs/bdev/blobfs_bdev.o 00:04:12.557 SO libspdk_scheduler_dpdk_governor.so.4.0 00:04:12.557 LIB libspdk_fsdev_aio.a 00:04:12.557 CC module/bdev/gpt/gpt.o 00:04:12.557 CC module/bdev/lvol/vbdev_lvol.o 00:04:12.557 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:12.557 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:12.557 LIB libspdk_scheduler_gscheduler.a 00:04:12.817 SO libspdk_fsdev_aio.so.1.0 00:04:12.817 SO libspdk_scheduler_gscheduler.so.4.0 00:04:12.817 CC module/accel/iaa/accel_iaa_rpc.o 00:04:12.817 SYMLINK libspdk_scheduler_gscheduler.so 00:04:12.817 SYMLINK libspdk_fsdev_aio.so 00:04:12.817 CC module/bdev/error/vbdev_error_rpc.o 00:04:12.817 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:12.817 CC module/bdev/gpt/vbdev_gpt.o 00:04:12.817 LIB libspdk_sock_posix.a 00:04:12.817 SO libspdk_sock_posix.so.6.0 00:04:12.817 LIB libspdk_accel_iaa.a 00:04:12.817 SYMLINK libspdk_sock_posix.so 00:04:12.817 SO libspdk_accel_iaa.so.3.0 
00:04:12.817 LIB libspdk_bdev_error.a 00:04:12.817 LIB libspdk_blobfs_bdev.a 00:04:12.817 SYMLINK libspdk_accel_iaa.so 00:04:13.076 SO libspdk_bdev_error.so.6.0 00:04:13.076 SO libspdk_blobfs_bdev.so.6.0 00:04:13.076 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:13.076 CC module/bdev/malloc/bdev_malloc.o 00:04:13.076 CC module/bdev/null/bdev_null.o 00:04:13.076 SYMLINK libspdk_blobfs_bdev.so 00:04:13.076 SYMLINK libspdk_bdev_error.so 00:04:13.076 CC module/bdev/nvme/bdev_nvme.o 00:04:13.076 LIB libspdk_bdev_gpt.a 00:04:13.076 SO libspdk_bdev_gpt.so.6.0 00:04:13.076 CC module/bdev/passthru/vbdev_passthru.o 00:04:13.076 LIB libspdk_bdev_delay.a 00:04:13.076 SYMLINK libspdk_bdev_gpt.so 00:04:13.076 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:13.076 CC module/bdev/raid/bdev_raid.o 00:04:13.076 CC module/bdev/split/vbdev_split.o 00:04:13.076 SO libspdk_bdev_delay.so.6.0 00:04:13.076 LIB libspdk_bdev_lvol.a 00:04:13.076 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:13.076 SO libspdk_bdev_lvol.so.6.0 00:04:13.336 SYMLINK libspdk_bdev_delay.so 00:04:13.336 SYMLINK libspdk_bdev_lvol.so 00:04:13.336 CC module/bdev/null/bdev_null_rpc.o 00:04:13.336 CC module/bdev/split/vbdev_split_rpc.o 00:04:13.336 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:13.336 CC module/bdev/raid/bdev_raid_rpc.o 00:04:13.336 LIB libspdk_bdev_passthru.a 00:04:13.336 CC module/bdev/aio/bdev_aio.o 00:04:13.336 SO libspdk_bdev_passthru.so.6.0 00:04:13.336 LIB libspdk_bdev_null.a 00:04:13.336 LIB libspdk_bdev_split.a 00:04:13.336 CC module/bdev/ftl/bdev_ftl.o 00:04:13.336 SO libspdk_bdev_null.so.6.0 00:04:13.336 SO libspdk_bdev_split.so.6.0 00:04:13.595 SYMLINK libspdk_bdev_passthru.so 00:04:13.595 CC module/bdev/aio/bdev_aio_rpc.o 00:04:13.595 SYMLINK libspdk_bdev_null.so 00:04:13.595 LIB libspdk_bdev_malloc.a 00:04:13.595 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:13.595 SYMLINK libspdk_bdev_split.so 00:04:13.595 CC module/bdev/raid/bdev_raid_sb.o 00:04:13.595 CC 
module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:13.595 SO libspdk_bdev_malloc.so.6.0 00:04:13.595 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:13.595 SYMLINK libspdk_bdev_malloc.so 00:04:13.595 CC module/bdev/raid/raid0.o 00:04:13.595 LIB libspdk_bdev_zone_block.a 00:04:13.595 CC module/bdev/nvme/nvme_rpc.o 00:04:13.595 LIB libspdk_bdev_aio.a 00:04:13.595 SO libspdk_bdev_zone_block.so.6.0 00:04:13.855 SO libspdk_bdev_aio.so.6.0 00:04:13.855 LIB libspdk_bdev_ftl.a 00:04:13.855 SYMLINK libspdk_bdev_zone_block.so 00:04:13.855 CC module/bdev/nvme/bdev_mdns_client.o 00:04:13.855 CC module/bdev/nvme/vbdev_opal.o 00:04:13.855 CC module/bdev/iscsi/bdev_iscsi.o 00:04:13.855 SO libspdk_bdev_ftl.so.6.0 00:04:13.855 SYMLINK libspdk_bdev_aio.so 00:04:13.855 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:13.855 SYMLINK libspdk_bdev_ftl.so 00:04:13.855 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:13.855 CC module/bdev/raid/raid1.o 00:04:13.855 CC module/bdev/raid/concat.o 00:04:14.114 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:14.114 CC module/bdev/raid/raid5f.o 00:04:14.114 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:14.114 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:14.114 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:14.114 LIB libspdk_bdev_iscsi.a 00:04:14.114 SO libspdk_bdev_iscsi.so.6.0 00:04:14.377 SYMLINK libspdk_bdev_iscsi.so 00:04:14.637 LIB libspdk_bdev_raid.a 00:04:14.637 LIB libspdk_bdev_virtio.a 00:04:14.637 SO libspdk_bdev_raid.so.6.0 00:04:14.637 SO libspdk_bdev_virtio.so.6.0 00:04:14.637 SYMLINK libspdk_bdev_virtio.so 00:04:14.637 SYMLINK libspdk_bdev_raid.so 00:04:15.578 LIB libspdk_bdev_nvme.a 00:04:15.578 SO libspdk_bdev_nvme.so.7.0 00:04:15.578 SYMLINK libspdk_bdev_nvme.so 00:04:16.148 CC module/event/subsystems/iobuf/iobuf.o 00:04:16.148 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:16.148 CC module/event/subsystems/scheduler/scheduler.o 00:04:16.148 CC module/event/subsystems/fsdev/fsdev.o 00:04:16.148 CC module/event/subsystems/vmd/vmd.o 
00:04:16.148 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:16.148 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:16.148 CC module/event/subsystems/sock/sock.o 00:04:16.148 CC module/event/subsystems/keyring/keyring.o 00:04:16.408 LIB libspdk_event_keyring.a 00:04:16.408 LIB libspdk_event_vhost_blk.a 00:04:16.408 LIB libspdk_event_scheduler.a 00:04:16.408 LIB libspdk_event_fsdev.a 00:04:16.408 LIB libspdk_event_sock.a 00:04:16.408 LIB libspdk_event_iobuf.a 00:04:16.408 SO libspdk_event_keyring.so.1.0 00:04:16.408 SO libspdk_event_scheduler.so.4.0 00:04:16.408 SO libspdk_event_vhost_blk.so.3.0 00:04:16.408 SO libspdk_event_fsdev.so.1.0 00:04:16.408 LIB libspdk_event_vmd.a 00:04:16.408 SO libspdk_event_sock.so.5.0 00:04:16.408 SO libspdk_event_iobuf.so.3.0 00:04:16.408 SO libspdk_event_vmd.so.6.0 00:04:16.408 SYMLINK libspdk_event_scheduler.so 00:04:16.408 SYMLINK libspdk_event_fsdev.so 00:04:16.408 SYMLINK libspdk_event_keyring.so 00:04:16.408 SYMLINK libspdk_event_vhost_blk.so 00:04:16.408 SYMLINK libspdk_event_sock.so 00:04:16.408 SYMLINK libspdk_event_iobuf.so 00:04:16.408 SYMLINK libspdk_event_vmd.so 00:04:16.668 CC module/event/subsystems/accel/accel.o 00:04:16.928 LIB libspdk_event_accel.a 00:04:16.928 SO libspdk_event_accel.so.6.0 00:04:16.928 SYMLINK libspdk_event_accel.so 00:04:17.499 CC module/event/subsystems/bdev/bdev.o 00:04:17.499 LIB libspdk_event_bdev.a 00:04:17.759 SO libspdk_event_bdev.so.6.0 00:04:17.759 SYMLINK libspdk_event_bdev.so 00:04:18.019 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:18.019 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:18.019 CC module/event/subsystems/scsi/scsi.o 00:04:18.019 CC module/event/subsystems/nbd/nbd.o 00:04:18.019 CC module/event/subsystems/ublk/ublk.o 00:04:18.279 LIB libspdk_event_ublk.a 00:04:18.279 LIB libspdk_event_nbd.a 00:04:18.279 LIB libspdk_event_scsi.a 00:04:18.279 SO libspdk_event_nbd.so.6.0 00:04:18.279 SO libspdk_event_ublk.so.3.0 00:04:18.279 SO libspdk_event_scsi.so.6.0 
00:04:18.279 LIB libspdk_event_nvmf.a 00:04:18.279 SYMLINK libspdk_event_nbd.so 00:04:18.279 SYMLINK libspdk_event_ublk.so 00:04:18.279 SYMLINK libspdk_event_scsi.so 00:04:18.279 SO libspdk_event_nvmf.so.6.0 00:04:18.279 SYMLINK libspdk_event_nvmf.so 00:04:18.539 CC module/event/subsystems/iscsi/iscsi.o 00:04:18.539 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:18.799 LIB libspdk_event_vhost_scsi.a 00:04:18.799 LIB libspdk_event_iscsi.a 00:04:18.799 SO libspdk_event_vhost_scsi.so.3.0 00:04:18.799 SO libspdk_event_iscsi.so.6.0 00:04:18.799 SYMLINK libspdk_event_vhost_scsi.so 00:04:18.799 SYMLINK libspdk_event_iscsi.so 00:04:19.059 SO libspdk.so.6.0 00:04:19.059 SYMLINK libspdk.so 00:04:19.318 CXX app/trace/trace.o 00:04:19.318 CC app/trace_record/trace_record.o 00:04:19.318 TEST_HEADER include/spdk/accel.h 00:04:19.318 TEST_HEADER include/spdk/accel_module.h 00:04:19.318 TEST_HEADER include/spdk/assert.h 00:04:19.318 TEST_HEADER include/spdk/barrier.h 00:04:19.318 TEST_HEADER include/spdk/base64.h 00:04:19.318 TEST_HEADER include/spdk/bdev.h 00:04:19.318 TEST_HEADER include/spdk/bdev_module.h 00:04:19.318 TEST_HEADER include/spdk/bdev_zone.h 00:04:19.318 TEST_HEADER include/spdk/bit_array.h 00:04:19.318 TEST_HEADER include/spdk/bit_pool.h 00:04:19.318 TEST_HEADER include/spdk/blob_bdev.h 00:04:19.318 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:19.318 TEST_HEADER include/spdk/blobfs.h 00:04:19.319 TEST_HEADER include/spdk/blob.h 00:04:19.319 TEST_HEADER include/spdk/conf.h 00:04:19.319 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:19.319 TEST_HEADER include/spdk/config.h 00:04:19.319 TEST_HEADER include/spdk/cpuset.h 00:04:19.319 TEST_HEADER include/spdk/crc16.h 00:04:19.319 TEST_HEADER include/spdk/crc32.h 00:04:19.319 CC app/nvmf_tgt/nvmf_main.o 00:04:19.319 TEST_HEADER include/spdk/crc64.h 00:04:19.319 TEST_HEADER include/spdk/dif.h 00:04:19.319 TEST_HEADER include/spdk/dma.h 00:04:19.319 TEST_HEADER include/spdk/endian.h 00:04:19.319 TEST_HEADER 
include/spdk/env_dpdk.h 00:04:19.319 TEST_HEADER include/spdk/env.h 00:04:19.319 TEST_HEADER include/spdk/event.h 00:04:19.319 TEST_HEADER include/spdk/fd_group.h 00:04:19.578 TEST_HEADER include/spdk/fd.h 00:04:19.578 TEST_HEADER include/spdk/file.h 00:04:19.578 TEST_HEADER include/spdk/fsdev.h 00:04:19.578 TEST_HEADER include/spdk/fsdev_module.h 00:04:19.578 TEST_HEADER include/spdk/ftl.h 00:04:19.578 TEST_HEADER include/spdk/fuse_dispatcher.h 00:04:19.578 TEST_HEADER include/spdk/gpt_spec.h 00:04:19.578 TEST_HEADER include/spdk/hexlify.h 00:04:19.578 CC examples/ioat/perf/perf.o 00:04:19.578 TEST_HEADER include/spdk/histogram_data.h 00:04:19.578 TEST_HEADER include/spdk/idxd.h 00:04:19.578 TEST_HEADER include/spdk/idxd_spec.h 00:04:19.578 TEST_HEADER include/spdk/init.h 00:04:19.578 TEST_HEADER include/spdk/ioat.h 00:04:19.578 TEST_HEADER include/spdk/ioat_spec.h 00:04:19.578 CC test/thread/poller_perf/poller_perf.o 00:04:19.578 TEST_HEADER include/spdk/iscsi_spec.h 00:04:19.578 TEST_HEADER include/spdk/json.h 00:04:19.578 CC examples/util/zipf/zipf.o 00:04:19.578 TEST_HEADER include/spdk/jsonrpc.h 00:04:19.578 TEST_HEADER include/spdk/keyring.h 00:04:19.578 TEST_HEADER include/spdk/keyring_module.h 00:04:19.578 TEST_HEADER include/spdk/likely.h 00:04:19.578 TEST_HEADER include/spdk/log.h 00:04:19.578 TEST_HEADER include/spdk/lvol.h 00:04:19.578 TEST_HEADER include/spdk/md5.h 00:04:19.578 TEST_HEADER include/spdk/memory.h 00:04:19.578 TEST_HEADER include/spdk/mmio.h 00:04:19.578 TEST_HEADER include/spdk/nbd.h 00:04:19.578 TEST_HEADER include/spdk/net.h 00:04:19.578 TEST_HEADER include/spdk/notify.h 00:04:19.578 TEST_HEADER include/spdk/nvme.h 00:04:19.578 TEST_HEADER include/spdk/nvme_intel.h 00:04:19.578 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:19.578 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:19.578 CC test/dma/test_dma/test_dma.o 00:04:19.578 TEST_HEADER include/spdk/nvme_spec.h 00:04:19.578 TEST_HEADER include/spdk/nvme_zns.h 00:04:19.578 
TEST_HEADER include/spdk/nvmf_cmd.h 00:04:19.578 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:19.578 TEST_HEADER include/spdk/nvmf.h 00:04:19.578 CC test/app/bdev_svc/bdev_svc.o 00:04:19.578 TEST_HEADER include/spdk/nvmf_spec.h 00:04:19.578 TEST_HEADER include/spdk/nvmf_transport.h 00:04:19.578 TEST_HEADER include/spdk/opal.h 00:04:19.578 TEST_HEADER include/spdk/opal_spec.h 00:04:19.578 TEST_HEADER include/spdk/pci_ids.h 00:04:19.578 TEST_HEADER include/spdk/pipe.h 00:04:19.578 TEST_HEADER include/spdk/queue.h 00:04:19.578 TEST_HEADER include/spdk/reduce.h 00:04:19.578 TEST_HEADER include/spdk/rpc.h 00:04:19.578 TEST_HEADER include/spdk/scheduler.h 00:04:19.578 TEST_HEADER include/spdk/scsi.h 00:04:19.578 TEST_HEADER include/spdk/scsi_spec.h 00:04:19.578 TEST_HEADER include/spdk/sock.h 00:04:19.578 TEST_HEADER include/spdk/stdinc.h 00:04:19.578 TEST_HEADER include/spdk/string.h 00:04:19.578 TEST_HEADER include/spdk/thread.h 00:04:19.578 TEST_HEADER include/spdk/trace.h 00:04:19.578 TEST_HEADER include/spdk/trace_parser.h 00:04:19.578 TEST_HEADER include/spdk/tree.h 00:04:19.578 TEST_HEADER include/spdk/ublk.h 00:04:19.578 TEST_HEADER include/spdk/util.h 00:04:19.578 TEST_HEADER include/spdk/uuid.h 00:04:19.578 TEST_HEADER include/spdk/version.h 00:04:19.578 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:19.579 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:19.579 TEST_HEADER include/spdk/vhost.h 00:04:19.579 TEST_HEADER include/spdk/vmd.h 00:04:19.579 TEST_HEADER include/spdk/xor.h 00:04:19.579 TEST_HEADER include/spdk/zipf.h 00:04:19.579 CXX test/cpp_headers/accel.o 00:04:19.579 LINK nvmf_tgt 00:04:19.579 LINK interrupt_tgt 00:04:19.579 LINK zipf 00:04:19.579 LINK poller_perf 00:04:19.579 LINK spdk_trace_record 00:04:19.579 LINK bdev_svc 00:04:19.579 LINK ioat_perf 00:04:19.838 CXX test/cpp_headers/accel_module.o 00:04:19.838 CXX test/cpp_headers/assert.o 00:04:19.838 LINK spdk_trace 00:04:19.838 CXX test/cpp_headers/barrier.o 00:04:19.838 CXX 
test/cpp_headers/base64.o 00:04:19.838 CXX test/cpp_headers/bdev.o 00:04:19.838 CC examples/ioat/verify/verify.o 00:04:19.838 CXX test/cpp_headers/bdev_module.o 00:04:19.838 CC test/env/vtophys/vtophys.o 00:04:20.098 LINK test_dma 00:04:20.098 CC test/app/histogram_perf/histogram_perf.o 00:04:20.098 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:20.098 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:20.098 CC test/env/mem_callbacks/mem_callbacks.o 00:04:20.098 CC examples/thread/thread/thread_ex.o 00:04:20.098 CC app/iscsi_tgt/iscsi_tgt.o 00:04:20.098 LINK vtophys 00:04:20.098 CXX test/cpp_headers/bdev_zone.o 00:04:20.098 LINK verify 00:04:20.098 LINK histogram_perf 00:04:20.098 LINK env_dpdk_post_init 00:04:20.098 CXX test/cpp_headers/bit_array.o 00:04:20.098 LINK iscsi_tgt 00:04:20.357 CXX test/cpp_headers/bit_pool.o 00:04:20.357 CXX test/cpp_headers/blob_bdev.o 00:04:20.357 LINK thread 00:04:20.357 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:20.357 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:20.357 CXX test/cpp_headers/blobfs_bdev.o 00:04:20.357 CC test/env/memory/memory_ut.o 00:04:20.357 CC test/env/pci/pci_ut.o 00:04:20.357 LINK nvme_fuzz 00:04:20.357 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:20.617 LINK mem_callbacks 00:04:20.617 CXX test/cpp_headers/blobfs.o 00:04:20.617 CC app/spdk_lspci/spdk_lspci.o 00:04:20.617 CC app/spdk_tgt/spdk_tgt.o 00:04:20.617 LINK spdk_lspci 00:04:20.617 CC examples/sock/hello_world/hello_sock.o 00:04:20.617 CC test/app/jsoncat/jsoncat.o 00:04:20.617 CXX test/cpp_headers/blob.o 00:04:20.876 CC test/event/event_perf/event_perf.o 00:04:20.876 LINK pci_ut 00:04:20.876 LINK jsoncat 00:04:20.876 LINK spdk_tgt 00:04:20.876 CXX test/cpp_headers/conf.o 00:04:20.876 CC app/spdk_nvme_perf/perf.o 00:04:20.876 LINK event_perf 00:04:20.876 LINK vhost_fuzz 00:04:20.876 LINK hello_sock 00:04:20.876 CXX test/cpp_headers/config.o 00:04:21.136 CXX test/cpp_headers/cpuset.o 00:04:21.136 CXX test/cpp_headers/crc16.o 
00:04:21.136 CC app/spdk_nvme_identify/identify.o 00:04:21.136 CXX test/cpp_headers/crc32.o 00:04:21.136 CC test/event/reactor/reactor.o 00:04:21.136 CC app/spdk_nvme_discover/discovery_aer.o 00:04:21.136 CXX test/cpp_headers/crc64.o 00:04:21.136 CC examples/vmd/lsvmd/lsvmd.o 00:04:21.136 LINK reactor 00:04:21.136 CC app/spdk_top/spdk_top.o 00:04:21.395 CC app/vhost/vhost.o 00:04:21.395 CXX test/cpp_headers/dif.o 00:04:21.395 LINK lsvmd 00:04:21.395 LINK spdk_nvme_discover 00:04:21.395 CC test/event/reactor_perf/reactor_perf.o 00:04:21.395 LINK vhost 00:04:21.395 CXX test/cpp_headers/dma.o 00:04:21.395 LINK memory_ut 00:04:21.655 CC examples/vmd/led/led.o 00:04:21.655 LINK reactor_perf 00:04:21.655 CC test/event/app_repeat/app_repeat.o 00:04:21.655 CXX test/cpp_headers/endian.o 00:04:21.655 LINK led 00:04:21.655 LINK app_repeat 00:04:21.655 CC app/spdk_dd/spdk_dd.o 00:04:21.655 CXX test/cpp_headers/env_dpdk.o 00:04:21.655 LINK spdk_nvme_perf 00:04:21.914 CC test/event/scheduler/scheduler.o 00:04:21.914 CC app/fio/nvme/fio_plugin.o 00:04:21.914 LINK spdk_nvme_identify 00:04:21.914 CXX test/cpp_headers/env.o 00:04:21.914 CXX test/cpp_headers/event.o 00:04:21.914 CC app/fio/bdev/fio_plugin.o 00:04:21.914 CC examples/idxd/perf/perf.o 00:04:21.914 LINK scheduler 00:04:21.914 LINK iscsi_fuzz 00:04:21.914 CXX test/cpp_headers/fd_group.o 00:04:22.174 LINK spdk_dd 00:04:22.174 LINK spdk_top 00:04:22.174 CXX test/cpp_headers/fd.o 00:04:22.174 CXX test/cpp_headers/file.o 00:04:22.174 CC examples/accel/perf/accel_perf.o 00:04:22.174 CC examples/fsdev/hello_world/hello_fsdev.o 00:04:22.433 CC test/app/stub/stub.o 00:04:22.433 LINK idxd_perf 00:04:22.433 CXX test/cpp_headers/fsdev.o 00:04:22.433 LINK spdk_nvme 00:04:22.433 CC examples/blob/hello_world/hello_blob.o 00:04:22.433 CC examples/nvme/hello_world/hello_world.o 00:04:22.433 LINK spdk_bdev 00:04:22.433 LINK stub 00:04:22.433 CXX test/cpp_headers/fsdev_module.o 00:04:22.433 CC test/nvme/aer/aer.o 00:04:22.433 LINK 
hello_fsdev 00:04:22.433 CC examples/nvme/reconnect/reconnect.o 00:04:22.433 CC test/rpc_client/rpc_client_test.o 00:04:22.692 LINK hello_blob 00:04:22.692 CXX test/cpp_headers/ftl.o 00:04:22.692 LINK hello_world 00:04:22.692 CC examples/blob/cli/blobcli.o 00:04:22.692 LINK accel_perf 00:04:22.692 LINK rpc_client_test 00:04:22.692 LINK aer 00:04:22.692 CC test/accel/dif/dif.o 00:04:22.692 CXX test/cpp_headers/fuse_dispatcher.o 00:04:22.952 CC test/blobfs/mkfs/mkfs.o 00:04:22.952 CXX test/cpp_headers/gpt_spec.o 00:04:22.952 CC examples/nvme/arbitration/arbitration.o 00:04:22.952 LINK reconnect 00:04:22.952 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:22.952 CC test/nvme/reset/reset.o 00:04:22.952 CXX test/cpp_headers/hexlify.o 00:04:22.952 LINK mkfs 00:04:22.952 CC test/lvol/esnap/esnap.o 00:04:23.211 CC examples/bdev/hello_world/hello_bdev.o 00:04:23.211 LINK blobcli 00:04:23.211 CC examples/bdev/bdevperf/bdevperf.o 00:04:23.211 CXX test/cpp_headers/histogram_data.o 00:04:23.211 LINK arbitration 00:04:23.211 CXX test/cpp_headers/idxd.o 00:04:23.211 LINK reset 00:04:23.211 LINK hello_bdev 00:04:23.470 CXX test/cpp_headers/idxd_spec.o 00:04:23.470 CXX test/cpp_headers/init.o 00:04:23.470 CC test/nvme/overhead/overhead.o 00:04:23.470 CC test/nvme/sgl/sgl.o 00:04:23.470 CC test/nvme/e2edp/nvme_dp.o 00:04:23.470 LINK nvme_manage 00:04:23.470 LINK dif 00:04:23.470 CXX test/cpp_headers/ioat.o 00:04:23.470 CXX test/cpp_headers/ioat_spec.o 00:04:23.470 CC examples/nvme/hotplug/hotplug.o 00:04:23.737 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:23.737 CXX test/cpp_headers/iscsi_spec.o 00:04:23.737 LINK sgl 00:04:23.737 LINK nvme_dp 00:04:23.737 LINK overhead 00:04:23.737 CC test/nvme/err_injection/err_injection.o 00:04:23.737 CXX test/cpp_headers/json.o 00:04:23.737 LINK hotplug 00:04:23.737 LINK cmb_copy 00:04:23.737 CC test/bdev/bdevio/bdevio.o 00:04:24.047 CC test/nvme/startup/startup.o 00:04:24.047 CC test/nvme/reserve/reserve.o 00:04:24.047 CC 
test/nvme/simple_copy/simple_copy.o 00:04:24.047 LINK err_injection 00:04:24.047 CXX test/cpp_headers/jsonrpc.o 00:04:24.047 LINK bdevperf 00:04:24.047 CXX test/cpp_headers/keyring.o 00:04:24.047 CC examples/nvme/abort/abort.o 00:04:24.047 LINK startup 00:04:24.047 CXX test/cpp_headers/keyring_module.o 00:04:24.047 LINK reserve 00:04:24.047 CXX test/cpp_headers/likely.o 00:04:24.047 LINK simple_copy 00:04:24.047 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:24.305 CC test/nvme/connect_stress/connect_stress.o 00:04:24.305 LINK bdevio 00:04:24.305 CXX test/cpp_headers/log.o 00:04:24.305 CC test/nvme/boot_partition/boot_partition.o 00:04:24.305 CC test/nvme/compliance/nvme_compliance.o 00:04:24.305 LINK pmr_persistence 00:04:24.305 CC test/nvme/fused_ordering/fused_ordering.o 00:04:24.305 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:24.305 LINK connect_stress 00:04:24.305 CXX test/cpp_headers/lvol.o 00:04:24.305 LINK abort 00:04:24.305 CXX test/cpp_headers/md5.o 00:04:24.306 LINK boot_partition 00:04:24.306 CXX test/cpp_headers/memory.o 00:04:24.566 LINK fused_ordering 00:04:24.566 LINK doorbell_aers 00:04:24.566 CXX test/cpp_headers/mmio.o 00:04:24.566 CC test/nvme/fdp/fdp.o 00:04:24.566 CXX test/cpp_headers/nbd.o 00:04:24.566 CC test/nvme/cuse/cuse.o 00:04:24.566 CXX test/cpp_headers/net.o 00:04:24.566 LINK nvme_compliance 00:04:24.566 CXX test/cpp_headers/notify.o 00:04:24.566 CXX test/cpp_headers/nvme.o 00:04:24.566 CXX test/cpp_headers/nvme_intel.o 00:04:24.566 CC examples/nvmf/nvmf/nvmf.o 00:04:24.566 CXX test/cpp_headers/nvme_ocssd.o 00:04:24.824 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:24.824 CXX test/cpp_headers/nvme_spec.o 00:04:24.824 CXX test/cpp_headers/nvme_zns.o 00:04:24.824 CXX test/cpp_headers/nvmf_cmd.o 00:04:24.824 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:24.824 CXX test/cpp_headers/nvmf.o 00:04:24.824 CXX test/cpp_headers/nvmf_spec.o 00:04:24.824 LINK fdp 00:04:24.824 CXX test/cpp_headers/nvmf_transport.o 00:04:24.824 CXX 
test/cpp_headers/opal.o 00:04:24.824 CXX test/cpp_headers/opal_spec.o 00:04:24.824 CXX test/cpp_headers/pci_ids.o 00:04:25.083 LINK nvmf 00:04:25.083 CXX test/cpp_headers/pipe.o 00:04:25.083 CXX test/cpp_headers/queue.o 00:04:25.083 CXX test/cpp_headers/reduce.o 00:04:25.083 CXX test/cpp_headers/rpc.o 00:04:25.083 CXX test/cpp_headers/scheduler.o 00:04:25.083 CXX test/cpp_headers/scsi.o 00:04:25.083 CXX test/cpp_headers/scsi_spec.o 00:04:25.083 CXX test/cpp_headers/sock.o 00:04:25.083 CXX test/cpp_headers/stdinc.o 00:04:25.083 CXX test/cpp_headers/string.o 00:04:25.083 CXX test/cpp_headers/thread.o 00:04:25.083 CXX test/cpp_headers/trace.o 00:04:25.342 CXX test/cpp_headers/trace_parser.o 00:04:25.342 CXX test/cpp_headers/tree.o 00:04:25.342 CXX test/cpp_headers/ublk.o 00:04:25.342 CXX test/cpp_headers/util.o 00:04:25.342 CXX test/cpp_headers/uuid.o 00:04:25.342 CXX test/cpp_headers/version.o 00:04:25.342 CXX test/cpp_headers/vfio_user_pci.o 00:04:25.342 CXX test/cpp_headers/vfio_user_spec.o 00:04:25.342 CXX test/cpp_headers/vhost.o 00:04:25.342 CXX test/cpp_headers/vmd.o 00:04:25.342 CXX test/cpp_headers/xor.o 00:04:25.342 CXX test/cpp_headers/zipf.o 00:04:25.911 LINK cuse 00:04:28.452 LINK esnap 00:04:28.452 00:04:28.452 real 1m15.169s 00:04:28.452 user 5m37.376s 00:04:28.452 sys 1m7.557s 00:04:28.452 04:53:39 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:04:28.452 04:53:39 make -- common/autotest_common.sh@10 -- $ set +x 00:04:28.452 ************************************ 00:04:28.452 END TEST make 00:04:28.452 ************************************ 00:04:28.713 04:53:39 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:28.713 04:53:39 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:28.713 04:53:39 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:28.713 04:53:39 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:28.713 04:53:39 -- pm/common@43 -- $ [[ -e 
/home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:04:28.713 04:53:39 -- pm/common@44 -- $ pid=6194 00:04:28.713 04:53:39 -- pm/common@50 -- $ kill -TERM 6194 00:04:28.713 04:53:39 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:28.713 04:53:39 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:04:28.713 04:53:39 -- pm/common@44 -- $ pid=6195 00:04:28.713 04:53:39 -- pm/common@50 -- $ kill -TERM 6195 00:04:28.713 04:53:39 -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:28.713 04:53:39 -- common/autotest_common.sh@1681 -- # lcov --version 00:04:28.713 04:53:39 -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:28.713 04:53:39 -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:28.713 04:53:39 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:28.713 04:53:39 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:28.713 04:53:39 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:28.713 04:53:39 -- scripts/common.sh@336 -- # IFS=.-: 00:04:28.713 04:53:39 -- scripts/common.sh@336 -- # read -ra ver1 00:04:28.713 04:53:39 -- scripts/common.sh@337 -- # IFS=.-: 00:04:28.713 04:53:39 -- scripts/common.sh@337 -- # read -ra ver2 00:04:28.713 04:53:39 -- scripts/common.sh@338 -- # local 'op=<' 00:04:28.713 04:53:39 -- scripts/common.sh@340 -- # ver1_l=2 00:04:28.713 04:53:39 -- scripts/common.sh@341 -- # ver2_l=1 00:04:28.713 04:53:39 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:28.713 04:53:39 -- scripts/common.sh@344 -- # case "$op" in 00:04:28.713 04:53:39 -- scripts/common.sh@345 -- # : 1 00:04:28.713 04:53:39 -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:28.713 04:53:39 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:28.713 04:53:39 -- scripts/common.sh@365 -- # decimal 1 00:04:28.713 04:53:39 -- scripts/common.sh@353 -- # local d=1 00:04:28.713 04:53:39 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:28.713 04:53:39 -- scripts/common.sh@355 -- # echo 1 00:04:28.713 04:53:39 -- scripts/common.sh@365 -- # ver1[v]=1 00:04:28.713 04:53:39 -- scripts/common.sh@366 -- # decimal 2 00:04:28.713 04:53:39 -- scripts/common.sh@353 -- # local d=2 00:04:28.713 04:53:39 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:28.713 04:53:39 -- scripts/common.sh@355 -- # echo 2 00:04:28.713 04:53:39 -- scripts/common.sh@366 -- # ver2[v]=2 00:04:28.713 04:53:39 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:28.713 04:53:39 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:28.713 04:53:39 -- scripts/common.sh@368 -- # return 0 00:04:28.713 04:53:39 -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:28.713 04:53:39 -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:28.713 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:28.713 --rc genhtml_branch_coverage=1 00:04:28.713 --rc genhtml_function_coverage=1 00:04:28.713 --rc genhtml_legend=1 00:04:28.713 --rc geninfo_all_blocks=1 00:04:28.713 --rc geninfo_unexecuted_blocks=1 00:04:28.713 00:04:28.713 ' 00:04:28.713 04:53:39 -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:28.713 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:28.713 --rc genhtml_branch_coverage=1 00:04:28.713 --rc genhtml_function_coverage=1 00:04:28.713 --rc genhtml_legend=1 00:04:28.713 --rc geninfo_all_blocks=1 00:04:28.713 --rc geninfo_unexecuted_blocks=1 00:04:28.713 00:04:28.713 ' 00:04:28.713 04:53:39 -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:28.713 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:28.713 --rc genhtml_branch_coverage=1 00:04:28.713 --rc 
genhtml_function_coverage=1 00:04:28.713 --rc genhtml_legend=1 00:04:28.713 --rc geninfo_all_blocks=1 00:04:28.713 --rc geninfo_unexecuted_blocks=1 00:04:28.713 00:04:28.713 ' 00:04:28.713 04:53:39 -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:28.713 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:28.713 --rc genhtml_branch_coverage=1 00:04:28.713 --rc genhtml_function_coverage=1 00:04:28.713 --rc genhtml_legend=1 00:04:28.713 --rc geninfo_all_blocks=1 00:04:28.713 --rc geninfo_unexecuted_blocks=1 00:04:28.713 00:04:28.713 ' 00:04:28.713 04:53:39 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:28.713 04:53:39 -- nvmf/common.sh@7 -- # uname -s 00:04:28.973 04:53:39 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:28.973 04:53:39 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:28.973 04:53:39 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:28.973 04:53:39 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:28.973 04:53:39 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:28.973 04:53:39 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:28.973 04:53:39 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:28.973 04:53:39 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:28.973 04:53:39 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:28.973 04:53:39 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:28.973 04:53:39 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:deb23972-aa47-4ab1-8501-74d5b0817ca5 00:04:28.973 04:53:39 -- nvmf/common.sh@18 -- # NVME_HOSTID=deb23972-aa47-4ab1-8501-74d5b0817ca5 00:04:28.973 04:53:39 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:28.973 04:53:39 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:28.973 04:53:39 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:28.973 04:53:39 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:04:28.973 04:53:39 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:28.973 04:53:39 -- scripts/common.sh@15 -- # shopt -s extglob 00:04:28.973 04:53:39 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:28.973 04:53:39 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:28.973 04:53:39 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:28.973 04:53:39 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:28.973 04:53:39 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:28.973 04:53:39 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:28.973 04:53:39 -- paths/export.sh@5 -- # export PATH 00:04:28.973 04:53:39 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:28.973 04:53:39 -- nvmf/common.sh@51 -- # : 0 00:04:28.973 04:53:39 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:28.973 04:53:39 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:28.973 04:53:39 -- nvmf/common.sh@25 
-- # '[' 0 -eq 1 ']' 00:04:28.973 04:53:39 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:28.973 04:53:39 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:28.973 04:53:39 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:28.973 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:28.973 04:53:39 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:28.973 04:53:39 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:28.973 04:53:39 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:28.973 04:53:39 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:28.973 04:53:39 -- spdk/autotest.sh@32 -- # uname -s 00:04:28.973 04:53:39 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:28.973 04:53:39 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:28.974 04:53:39 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:28.974 04:53:39 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:04:28.974 04:53:39 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:28.974 04:53:39 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:28.974 04:53:39 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:28.974 04:53:39 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:28.974 04:53:39 -- spdk/autotest.sh@48 -- # udevadm_pid=66784 00:04:28.974 04:53:39 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:28.974 04:53:39 -- pm/common@17 -- # local monitor 00:04:28.974 04:53:39 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:28.974 04:53:39 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:28.974 04:53:39 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:28.974 04:53:39 -- pm/common@25 -- # sleep 1 00:04:28.974 04:53:39 -- pm/common@21 -- # date +%s 00:04:28.974 04:53:39 -- 
pm/common@21 -- # date +%s 00:04:28.974 04:53:39 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1734152019 00:04:28.974 04:53:39 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1734152019 00:04:28.974 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1734152019_collect-cpu-load.pm.log 00:04:28.974 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1734152019_collect-vmstat.pm.log 00:04:29.913 04:53:40 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:29.913 04:53:40 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:29.913 04:53:40 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:29.913 04:53:40 -- common/autotest_common.sh@10 -- # set +x 00:04:29.913 04:53:40 -- spdk/autotest.sh@59 -- # create_test_list 00:04:29.913 04:53:40 -- common/autotest_common.sh@748 -- # xtrace_disable 00:04:29.913 04:53:40 -- common/autotest_common.sh@10 -- # set +x 00:04:29.913 04:53:40 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:04:29.913 04:53:40 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:04:29.913 04:53:40 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:04:29.913 04:53:40 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:04:29.913 04:53:40 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:04:29.913 04:53:40 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:29.913 04:53:40 -- common/autotest_common.sh@1455 -- # uname 00:04:29.913 04:53:40 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:04:29.913 04:53:40 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:29.913 04:53:40 -- common/autotest_common.sh@1475 -- 
# uname 00:04:29.913 04:53:40 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:04:29.913 04:53:40 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:04:29.913 04:53:40 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:04:30.172 lcov: LCOV version 1.15 00:04:30.172 04:53:40 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:04:45.059 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:45.059 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:04:59.972 04:54:08 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:04:59.972 04:54:08 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:59.972 04:54:08 -- common/autotest_common.sh@10 -- # set +x 00:04:59.972 04:54:08 -- spdk/autotest.sh@78 -- # rm -f 00:04:59.972 04:54:08 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:59.972 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:59.972 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:04:59.972 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:04:59.972 04:54:09 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:04:59.972 04:54:09 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:04:59.972 04:54:09 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:04:59.972 04:54:09 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:04:59.972 
04:54:09 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:59.972 04:54:09 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:04:59.972 04:54:09 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:04:59.972 04:54:09 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:59.972 04:54:09 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:59.972 04:54:09 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:59.972 04:54:09 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:04:59.972 04:54:09 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:04:59.972 04:54:09 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:59.972 04:54:09 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:59.972 04:54:09 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:59.972 04:54:09 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n2 00:04:59.972 04:54:09 -- common/autotest_common.sh@1648 -- # local device=nvme1n2 00:04:59.972 04:54:09 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:04:59.972 04:54:09 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:59.972 04:54:09 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:59.972 04:54:09 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n3 00:04:59.972 04:54:09 -- common/autotest_common.sh@1648 -- # local device=nvme1n3 00:04:59.972 04:54:09 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:04:59.972 04:54:09 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:59.972 04:54:09 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:04:59.972 04:54:09 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:59.972 04:54:09 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:59.972 04:54:09 -- spdk/autotest.sh@100 -- # 
block_in_use /dev/nvme0n1 00:04:59.972 04:54:09 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:04:59.972 04:54:09 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:59.972 No valid GPT data, bailing 00:04:59.972 04:54:09 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:59.972 04:54:09 -- scripts/common.sh@394 -- # pt= 00:04:59.972 04:54:09 -- scripts/common.sh@395 -- # return 1 00:04:59.972 04:54:09 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:59.972 1+0 records in 00:04:59.972 1+0 records out 00:04:59.972 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00630211 s, 166 MB/s 00:04:59.972 04:54:09 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:59.972 04:54:09 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:59.972 04:54:09 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:04:59.972 04:54:09 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:04:59.972 04:54:09 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:04:59.972 No valid GPT data, bailing 00:04:59.972 04:54:09 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:59.972 04:54:09 -- scripts/common.sh@394 -- # pt= 00:04:59.972 04:54:09 -- scripts/common.sh@395 -- # return 1 00:04:59.972 04:54:09 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:04:59.972 1+0 records in 00:04:59.972 1+0 records out 00:04:59.972 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00442804 s, 237 MB/s 00:04:59.972 04:54:09 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:59.972 04:54:09 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:59.972 04:54:09 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:04:59.972 04:54:09 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:04:59.972 04:54:09 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 
00:04:59.972 No valid GPT data, bailing 00:04:59.973 04:54:09 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:04:59.973 04:54:09 -- scripts/common.sh@394 -- # pt= 00:04:59.973 04:54:09 -- scripts/common.sh@395 -- # return 1 00:04:59.973 04:54:09 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:04:59.973 1+0 records in 00:04:59.973 1+0 records out 00:04:59.973 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00639066 s, 164 MB/s 00:04:59.973 04:54:09 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:59.973 04:54:09 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:59.973 04:54:09 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:04:59.973 04:54:09 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:04:59.973 04:54:09 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:04:59.973 No valid GPT data, bailing 00:04:59.973 04:54:09 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:04:59.973 04:54:09 -- scripts/common.sh@394 -- # pt= 00:04:59.973 04:54:09 -- scripts/common.sh@395 -- # return 1 00:04:59.973 04:54:09 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:04:59.973 1+0 records in 00:04:59.973 1+0 records out 00:04:59.973 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00634131 s, 165 MB/s 00:04:59.973 04:54:09 -- spdk/autotest.sh@105 -- # sync 00:04:59.973 04:54:10 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:59.973 04:54:10 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:59.973 04:54:10 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:02.514 04:54:12 -- spdk/autotest.sh@111 -- # uname -s 00:05:02.514 04:54:12 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:05:02.514 04:54:12 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:05:02.514 04:54:12 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 
00:05:03.083 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:03.083 Hugepages 00:05:03.083 node hugesize free / total 00:05:03.083 node0 1048576kB 0 / 0 00:05:03.083 node0 2048kB 0 / 0 00:05:03.083 00:05:03.083 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:03.083 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:05:03.083 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:05:03.343 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:05:03.343 04:54:14 -- spdk/autotest.sh@117 -- # uname -s 00:05:03.343 04:54:14 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:05:03.343 04:54:14 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:05:03.343 04:54:14 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:03.913 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:04.172 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:04.172 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:04.172 04:54:15 -- common/autotest_common.sh@1515 -- # sleep 1 00:05:05.553 04:54:16 -- common/autotest_common.sh@1516 -- # bdfs=() 00:05:05.553 04:54:16 -- common/autotest_common.sh@1516 -- # local bdfs 00:05:05.553 04:54:16 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:05:05.553 04:54:16 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:05:05.553 04:54:16 -- common/autotest_common.sh@1496 -- # bdfs=() 00:05:05.553 04:54:16 -- common/autotest_common.sh@1496 -- # local bdfs 00:05:05.553 04:54:16 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:05.553 04:54:16 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:05.553 04:54:16 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:05:05.553 04:54:16 -- 
common/autotest_common.sh@1498 -- # (( 2 == 0 )) 00:05:05.553 04:54:16 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:05:05.553 04:54:16 -- common/autotest_common.sh@1520 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:05.813 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:05.813 Waiting for block devices as requested 00:05:05.813 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:05:06.073 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:05:06.073 04:54:16 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:05:06.073 04:54:16 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:05:06.073 04:54:16 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:06.073 04:54:16 -- common/autotest_common.sh@1485 -- # grep 0000:00:10.0/nvme/nvme 00:05:06.073 04:54:16 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:06.073 04:54:16 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:05:06.073 04:54:16 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:06.073 04:54:16 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme1 00:05:06.073 04:54:16 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme1 00:05:06.073 04:54:16 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme1 ]] 00:05:06.073 04:54:16 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:05:06.073 04:54:16 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme1 00:05:06.073 04:54:16 -- common/autotest_common.sh@1529 -- # grep oacs 00:05:06.073 04:54:16 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:05:06.073 04:54:16 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:05:06.073 04:54:16 -- common/autotest_common.sh@1532 -- 
# [[ 8 -ne 0 ]] 00:05:06.073 04:54:16 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:05:06.073 04:54:16 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme1 00:05:06.073 04:54:16 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:05:06.073 04:54:16 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:05:06.073 04:54:16 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:05:06.073 04:54:16 -- common/autotest_common.sh@1541 -- # continue 00:05:06.073 04:54:16 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:05:06.073 04:54:16 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:05:06.073 04:54:16 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:06.073 04:54:16 -- common/autotest_common.sh@1485 -- # grep 0000:00:11.0/nvme/nvme 00:05:06.074 04:54:16 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:06.074 04:54:16 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:05:06.074 04:54:16 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:06.074 04:54:16 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:05:06.074 04:54:16 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:05:06.074 04:54:16 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:05:06.074 04:54:16 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:05:06.074 04:54:16 -- common/autotest_common.sh@1529 -- # grep oacs 00:05:06.074 04:54:16 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:05:06.074 04:54:16 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:05:06.074 04:54:16 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:05:06.074 04:54:16 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:05:06.074 04:54:16 -- common/autotest_common.sh@1538 -- # nvme id-ctrl 
/dev/nvme0 00:05:06.074 04:54:16 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:05:06.074 04:54:16 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:05:06.074 04:54:16 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:05:06.074 04:54:16 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:05:06.074 04:54:16 -- common/autotest_common.sh@1541 -- # continue 00:05:06.074 04:54:16 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:05:06.074 04:54:16 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:06.074 04:54:16 -- common/autotest_common.sh@10 -- # set +x 00:05:06.333 04:54:16 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:05:06.333 04:54:16 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:06.333 04:54:16 -- common/autotest_common.sh@10 -- # set +x 00:05:06.333 04:54:17 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:06.903 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:07.163 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:07.163 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:07.163 04:54:18 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:05:07.163 04:54:18 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:07.163 04:54:18 -- common/autotest_common.sh@10 -- # set +x 00:05:07.424 04:54:18 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:05:07.424 04:54:18 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:05:07.424 04:54:18 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:05:07.424 04:54:18 -- common/autotest_common.sh@1561 -- # bdfs=() 00:05:07.424 04:54:18 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:05:07.424 04:54:18 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:05:07.424 04:54:18 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:05:07.424 04:54:18 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:05:07.424 
04:54:18 -- common/autotest_common.sh@1496 -- # bdfs=() 00:05:07.424 04:54:18 -- common/autotest_common.sh@1496 -- # local bdfs 00:05:07.424 04:54:18 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:07.424 04:54:18 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:07.424 04:54:18 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:05:07.424 04:54:18 -- common/autotest_common.sh@1498 -- # (( 2 == 0 )) 00:05:07.424 04:54:18 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:05:07.424 04:54:18 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:05:07.424 04:54:18 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:05:07.424 04:54:18 -- common/autotest_common.sh@1564 -- # device=0x0010 00:05:07.424 04:54:18 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:07.424 04:54:18 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:05:07.424 04:54:18 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:05:07.424 04:54:18 -- common/autotest_common.sh@1564 -- # device=0x0010 00:05:07.424 04:54:18 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:07.424 04:54:18 -- common/autotest_common.sh@1570 -- # (( 0 > 0 )) 00:05:07.424 04:54:18 -- common/autotest_common.sh@1570 -- # return 0 00:05:07.424 04:54:18 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:05:07.424 04:54:18 -- common/autotest_common.sh@1578 -- # return 0 00:05:07.424 04:54:18 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:05:07.424 04:54:18 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:05:07.424 04:54:18 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:07.424 04:54:18 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:07.424 04:54:18 -- spdk/autotest.sh@149 -- # timing_enter lib 00:05:07.424 04:54:18 -- 
common/autotest_common.sh@724 -- # xtrace_disable 00:05:07.424 04:54:18 -- common/autotest_common.sh@10 -- # set +x 00:05:07.424 04:54:18 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:05:07.424 04:54:18 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:07.424 04:54:18 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:07.424 04:54:18 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:07.424 04:54:18 -- common/autotest_common.sh@10 -- # set +x 00:05:07.424 ************************************ 00:05:07.424 START TEST env 00:05:07.424 ************************************ 00:05:07.424 04:54:18 env -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:07.684 * Looking for test storage... 00:05:07.684 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:05:07.684 04:54:18 env -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:07.684 04:54:18 env -- common/autotest_common.sh@1681 -- # lcov --version 00:05:07.684 04:54:18 env -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:07.684 04:54:18 env -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:07.684 04:54:18 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:07.684 04:54:18 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:07.684 04:54:18 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:07.684 04:54:18 env -- scripts/common.sh@336 -- # IFS=.-: 00:05:07.684 04:54:18 env -- scripts/common.sh@336 -- # read -ra ver1 00:05:07.684 04:54:18 env -- scripts/common.sh@337 -- # IFS=.-: 00:05:07.684 04:54:18 env -- scripts/common.sh@337 -- # read -ra ver2 00:05:07.684 04:54:18 env -- scripts/common.sh@338 -- # local 'op=<' 00:05:07.684 04:54:18 env -- scripts/common.sh@340 -- # ver1_l=2 00:05:07.684 04:54:18 env -- scripts/common.sh@341 -- # ver2_l=1 00:05:07.684 04:54:18 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:07.684 04:54:18 env -- 
scripts/common.sh@344 -- # case "$op" in 00:05:07.684 04:54:18 env -- scripts/common.sh@345 -- # : 1 00:05:07.684 04:54:18 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:07.684 04:54:18 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:07.684 04:54:18 env -- scripts/common.sh@365 -- # decimal 1 00:05:07.684 04:54:18 env -- scripts/common.sh@353 -- # local d=1 00:05:07.684 04:54:18 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:07.684 04:54:18 env -- scripts/common.sh@355 -- # echo 1 00:05:07.684 04:54:18 env -- scripts/common.sh@365 -- # ver1[v]=1 00:05:07.684 04:54:18 env -- scripts/common.sh@366 -- # decimal 2 00:05:07.684 04:54:18 env -- scripts/common.sh@353 -- # local d=2 00:05:07.684 04:54:18 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:07.684 04:54:18 env -- scripts/common.sh@355 -- # echo 2 00:05:07.684 04:54:18 env -- scripts/common.sh@366 -- # ver2[v]=2 00:05:07.684 04:54:18 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:07.684 04:54:18 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:07.684 04:54:18 env -- scripts/common.sh@368 -- # return 0 00:05:07.684 04:54:18 env -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:07.684 04:54:18 env -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:07.684 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.684 --rc genhtml_branch_coverage=1 00:05:07.684 --rc genhtml_function_coverage=1 00:05:07.684 --rc genhtml_legend=1 00:05:07.684 --rc geninfo_all_blocks=1 00:05:07.684 --rc geninfo_unexecuted_blocks=1 00:05:07.684 00:05:07.684 ' 00:05:07.684 04:54:18 env -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:07.684 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.684 --rc genhtml_branch_coverage=1 00:05:07.684 --rc genhtml_function_coverage=1 00:05:07.685 --rc genhtml_legend=1 00:05:07.685 --rc 
geninfo_all_blocks=1 00:05:07.685 --rc geninfo_unexecuted_blocks=1 00:05:07.685 00:05:07.685 ' 00:05:07.685 04:54:18 env -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:07.685 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.685 --rc genhtml_branch_coverage=1 00:05:07.685 --rc genhtml_function_coverage=1 00:05:07.685 --rc genhtml_legend=1 00:05:07.685 --rc geninfo_all_blocks=1 00:05:07.685 --rc geninfo_unexecuted_blocks=1 00:05:07.685 00:05:07.685 ' 00:05:07.685 04:54:18 env -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:07.685 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.685 --rc genhtml_branch_coverage=1 00:05:07.685 --rc genhtml_function_coverage=1 00:05:07.685 --rc genhtml_legend=1 00:05:07.685 --rc geninfo_all_blocks=1 00:05:07.685 --rc geninfo_unexecuted_blocks=1 00:05:07.685 00:05:07.685 ' 00:05:07.685 04:54:18 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:07.685 04:54:18 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:07.685 04:54:18 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:07.685 04:54:18 env -- common/autotest_common.sh@10 -- # set +x 00:05:07.685 ************************************ 00:05:07.685 START TEST env_memory 00:05:07.685 ************************************ 00:05:07.685 04:54:18 env.env_memory -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:07.685 00:05:07.685 00:05:07.685 CUnit - A unit testing framework for C - Version 2.1-3 00:05:07.685 http://cunit.sourceforge.net/ 00:05:07.685 00:05:07.685 00:05:07.685 Suite: memory 00:05:07.685 Test: alloc and free memory map ...[2024-12-14 04:54:18.505710] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:07.685 passed 00:05:07.685 Test: mem map translation ...[2024-12-14 04:54:18.546342] 
/home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:07.685 [2024-12-14 04:54:18.546380] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:07.685 [2024-12-14 04:54:18.546449] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:07.685 [2024-12-14 04:54:18.546469] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:07.944 passed 00:05:07.944 Test: mem map registration ...[2024-12-14 04:54:18.608333] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:05:07.944 [2024-12-14 04:54:18.608372] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:05:07.944 passed 00:05:07.944 Test: mem map adjacent registrations ...passed 00:05:07.944 00:05:07.944 Run Summary: Type Total Ran Passed Failed Inactive 00:05:07.944 suites 1 1 n/a 0 0 00:05:07.944 tests 4 4 4 0 0 00:05:07.944 asserts 152 152 152 0 n/a 00:05:07.944 00:05:07.944 Elapsed time = 0.221 seconds 00:05:07.944 00:05:07.944 real 0m0.273s 00:05:07.944 user 0m0.232s 00:05:07.944 sys 0m0.029s 00:05:07.944 04:54:18 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:07.944 04:54:18 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:07.944 ************************************ 00:05:07.944 END TEST env_memory 00:05:07.944 ************************************ 00:05:07.944 04:54:18 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:07.944 
04:54:18 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:07.944 04:54:18 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:07.944 04:54:18 env -- common/autotest_common.sh@10 -- # set +x 00:05:07.944 ************************************ 00:05:07.944 START TEST env_vtophys 00:05:07.944 ************************************ 00:05:07.944 04:54:18 env.env_vtophys -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:07.944 EAL: lib.eal log level changed from notice to debug 00:05:07.944 EAL: Detected lcore 0 as core 0 on socket 0 00:05:07.944 EAL: Detected lcore 1 as core 0 on socket 0 00:05:07.944 EAL: Detected lcore 2 as core 0 on socket 0 00:05:07.944 EAL: Detected lcore 3 as core 0 on socket 0 00:05:07.944 EAL: Detected lcore 4 as core 0 on socket 0 00:05:07.944 EAL: Detected lcore 5 as core 0 on socket 0 00:05:07.944 EAL: Detected lcore 6 as core 0 on socket 0 00:05:07.944 EAL: Detected lcore 7 as core 0 on socket 0 00:05:07.944 EAL: Detected lcore 8 as core 0 on socket 0 00:05:07.945 EAL: Detected lcore 9 as core 0 on socket 0 00:05:08.205 EAL: Maximum logical cores by configuration: 128 00:05:08.205 EAL: Detected CPU lcores: 10 00:05:08.205 EAL: Detected NUMA nodes: 1 00:05:08.205 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:05:08.205 EAL: Detected shared linkage of DPDK 00:05:08.205 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24.0 00:05:08.205 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24.0 00:05:08.205 EAL: Registered [vdev] bus. 
00:05:08.205 EAL: bus.vdev log level changed from disabled to notice 00:05:08.205 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24.0 00:05:08.205 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24.0 00:05:08.205 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:05:08.205 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:05:08.205 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:05:08.205 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:05:08.205 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:05:08.205 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:05:08.205 EAL: No shared files mode enabled, IPC will be disabled 00:05:08.205 EAL: No shared files mode enabled, IPC is disabled 00:05:08.205 EAL: Selected IOVA mode 'PA' 00:05:08.205 EAL: Probing VFIO support... 00:05:08.205 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:08.205 EAL: VFIO modules not loaded, skipping VFIO support... 00:05:08.205 EAL: Ask a virtual area of 0x2e000 bytes 00:05:08.205 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:08.205 EAL: Setting up physically contiguous memory... 
00:05:08.205 EAL: Setting maximum number of open files to 524288 00:05:08.205 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:08.205 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:08.205 EAL: Ask a virtual area of 0x61000 bytes 00:05:08.205 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:08.205 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:08.205 EAL: Ask a virtual area of 0x400000000 bytes 00:05:08.205 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:08.205 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:08.205 EAL: Ask a virtual area of 0x61000 bytes 00:05:08.205 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:08.205 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:08.205 EAL: Ask a virtual area of 0x400000000 bytes 00:05:08.205 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:08.205 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:08.205 EAL: Ask a virtual area of 0x61000 bytes 00:05:08.205 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:08.205 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:08.205 EAL: Ask a virtual area of 0x400000000 bytes 00:05:08.205 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:08.205 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:08.205 EAL: Ask a virtual area of 0x61000 bytes 00:05:08.205 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:08.205 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:08.205 EAL: Ask a virtual area of 0x400000000 bytes 00:05:08.205 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:08.205 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:08.205 EAL: Hugepages will be freed exactly as allocated. 
00:05:08.205 EAL: No shared files mode enabled, IPC is disabled 00:05:08.205 EAL: No shared files mode enabled, IPC is disabled 00:05:08.205 EAL: TSC frequency is ~2290000 KHz 00:05:08.205 EAL: Main lcore 0 is ready (tid=7f2cbb9cba40;cpuset=[0]) 00:05:08.205 EAL: Trying to obtain current memory policy. 00:05:08.205 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:08.205 EAL: Restoring previous memory policy: 0 00:05:08.205 EAL: request: mp_malloc_sync 00:05:08.205 EAL: No shared files mode enabled, IPC is disabled 00:05:08.205 EAL: Heap on socket 0 was expanded by 2MB 00:05:08.205 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:08.205 EAL: No shared files mode enabled, IPC is disabled 00:05:08.205 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:08.205 EAL: Mem event callback 'spdk:(nil)' registered 00:05:08.205 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:05:08.205 00:05:08.205 00:05:08.205 CUnit - A unit testing framework for C - Version 2.1-3 00:05:08.205 http://cunit.sourceforge.net/ 00:05:08.205 00:05:08.205 00:05:08.205 Suite: components_suite 00:05:08.465 Test: vtophys_malloc_test ...passed 00:05:08.465 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:08.465 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:08.465 EAL: Restoring previous memory policy: 4 00:05:08.465 EAL: Calling mem event callback 'spdk:(nil)' 00:05:08.465 EAL: request: mp_malloc_sync 00:05:08.465 EAL: No shared files mode enabled, IPC is disabled 00:05:08.465 EAL: Heap on socket 0 was expanded by 4MB 00:05:08.465 EAL: Calling mem event callback 'spdk:(nil)' 00:05:08.465 EAL: request: mp_malloc_sync 00:05:08.465 EAL: No shared files mode enabled, IPC is disabled 00:05:08.465 EAL: Heap on socket 0 was shrunk by 4MB 00:05:08.465 EAL: Trying to obtain current memory policy. 
00:05:08.465 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:08.465 EAL: Restoring previous memory policy: 4 00:05:08.465 EAL: Calling mem event callback 'spdk:(nil)' 00:05:08.465 EAL: request: mp_malloc_sync 00:05:08.465 EAL: No shared files mode enabled, IPC is disabled 00:05:08.465 EAL: Heap on socket 0 was expanded by 6MB 00:05:08.465 EAL: Calling mem event callback 'spdk:(nil)' 00:05:08.465 EAL: request: mp_malloc_sync 00:05:08.465 EAL: No shared files mode enabled, IPC is disabled 00:05:08.465 EAL: Heap on socket 0 was shrunk by 6MB 00:05:08.465 EAL: Trying to obtain current memory policy. 00:05:08.465 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:08.465 EAL: Restoring previous memory policy: 4 00:05:08.465 EAL: Calling mem event callback 'spdk:(nil)' 00:05:08.465 EAL: request: mp_malloc_sync 00:05:08.465 EAL: No shared files mode enabled, IPC is disabled 00:05:08.465 EAL: Heap on socket 0 was expanded by 10MB 00:05:08.465 EAL: Calling mem event callback 'spdk:(nil)' 00:05:08.465 EAL: request: mp_malloc_sync 00:05:08.465 EAL: No shared files mode enabled, IPC is disabled 00:05:08.465 EAL: Heap on socket 0 was shrunk by 10MB 00:05:08.465 EAL: Trying to obtain current memory policy. 00:05:08.465 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:08.465 EAL: Restoring previous memory policy: 4 00:05:08.465 EAL: Calling mem event callback 'spdk:(nil)' 00:05:08.465 EAL: request: mp_malloc_sync 00:05:08.465 EAL: No shared files mode enabled, IPC is disabled 00:05:08.465 EAL: Heap on socket 0 was expanded by 18MB 00:05:08.465 EAL: Calling mem event callback 'spdk:(nil)' 00:05:08.465 EAL: request: mp_malloc_sync 00:05:08.465 EAL: No shared files mode enabled, IPC is disabled 00:05:08.465 EAL: Heap on socket 0 was shrunk by 18MB 00:05:08.465 EAL: Trying to obtain current memory policy. 
00:05:08.465 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:08.465 EAL: Restoring previous memory policy: 4 00:05:08.465 EAL: Calling mem event callback 'spdk:(nil)' 00:05:08.465 EAL: request: mp_malloc_sync 00:05:08.465 EAL: No shared files mode enabled, IPC is disabled 00:05:08.465 EAL: Heap on socket 0 was expanded by 34MB 00:05:08.465 EAL: Calling mem event callback 'spdk:(nil)' 00:05:08.465 EAL: request: mp_malloc_sync 00:05:08.465 EAL: No shared files mode enabled, IPC is disabled 00:05:08.465 EAL: Heap on socket 0 was shrunk by 34MB 00:05:08.465 EAL: Trying to obtain current memory policy. 00:05:08.465 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:08.465 EAL: Restoring previous memory policy: 4 00:05:08.465 EAL: Calling mem event callback 'spdk:(nil)' 00:05:08.465 EAL: request: mp_malloc_sync 00:05:08.465 EAL: No shared files mode enabled, IPC is disabled 00:05:08.465 EAL: Heap on socket 0 was expanded by 66MB 00:05:08.725 EAL: Calling mem event callback 'spdk:(nil)' 00:05:08.725 EAL: request: mp_malloc_sync 00:05:08.725 EAL: No shared files mode enabled, IPC is disabled 00:05:08.725 EAL: Heap on socket 0 was shrunk by 66MB 00:05:08.725 EAL: Trying to obtain current memory policy. 00:05:08.725 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:08.725 EAL: Restoring previous memory policy: 4 00:05:08.725 EAL: Calling mem event callback 'spdk:(nil)' 00:05:08.725 EAL: request: mp_malloc_sync 00:05:08.725 EAL: No shared files mode enabled, IPC is disabled 00:05:08.725 EAL: Heap on socket 0 was expanded by 130MB 00:05:08.725 EAL: Calling mem event callback 'spdk:(nil)' 00:05:08.725 EAL: request: mp_malloc_sync 00:05:08.725 EAL: No shared files mode enabled, IPC is disabled 00:05:08.725 EAL: Heap on socket 0 was shrunk by 130MB 00:05:08.725 EAL: Trying to obtain current memory policy. 
00:05:08.725 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:08.725 EAL: Restoring previous memory policy: 4 00:05:08.725 EAL: Calling mem event callback 'spdk:(nil)' 00:05:08.725 EAL: request: mp_malloc_sync 00:05:08.725 EAL: No shared files mode enabled, IPC is disabled 00:05:08.725 EAL: Heap on socket 0 was expanded by 258MB 00:05:08.725 EAL: Calling mem event callback 'spdk:(nil)' 00:05:08.725 EAL: request: mp_malloc_sync 00:05:08.725 EAL: No shared files mode enabled, IPC is disabled 00:05:08.725 EAL: Heap on socket 0 was shrunk by 258MB 00:05:08.725 EAL: Trying to obtain current memory policy. 00:05:08.725 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:08.984 EAL: Restoring previous memory policy: 4 00:05:08.984 EAL: Calling mem event callback 'spdk:(nil)' 00:05:08.984 EAL: request: mp_malloc_sync 00:05:08.984 EAL: No shared files mode enabled, IPC is disabled 00:05:08.984 EAL: Heap on socket 0 was expanded by 514MB 00:05:08.984 EAL: Calling mem event callback 'spdk:(nil)' 00:05:08.984 EAL: request: mp_malloc_sync 00:05:08.984 EAL: No shared files mode enabled, IPC is disabled 00:05:08.984 EAL: Heap on socket 0 was shrunk by 514MB 00:05:08.984 EAL: Trying to obtain current memory policy. 
00:05:08.984 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:09.252 EAL: Restoring previous memory policy: 4 00:05:09.252 EAL: Calling mem event callback 'spdk:(nil)' 00:05:09.252 EAL: request: mp_malloc_sync 00:05:09.252 EAL: No shared files mode enabled, IPC is disabled 00:05:09.252 EAL: Heap on socket 0 was expanded by 1026MB 00:05:09.524 EAL: Calling mem event callback 'spdk:(nil)' 00:05:09.524 passed 00:05:09.524 00:05:09.524 Run Summary: Type Total Ran Passed Failed Inactive 00:05:09.524 suites 1 1 n/a 0 0 00:05:09.524 tests 2 2 2 0 0 00:05:09.524 asserts 5428 5428 5428 0 n/a 00:05:09.524 00:05:09.524 Elapsed time = 1.369 seconds 00:05:09.524 EAL: request: mp_malloc_sync 00:05:09.524 EAL: No shared files mode enabled, IPC is disabled 00:05:09.524 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:09.524 EAL: Calling mem event callback 'spdk:(nil)' 00:05:09.524 EAL: request: mp_malloc_sync 00:05:09.524 EAL: No shared files mode enabled, IPC is disabled 00:05:09.524 EAL: Heap on socket 0 was shrunk by 2MB 00:05:09.524 EAL: No shared files mode enabled, IPC is disabled 00:05:09.524 EAL: No shared files mode enabled, IPC is disabled 00:05:09.524 EAL: No shared files mode enabled, IPC is disabled 00:05:09.797 00:05:09.797 real 0m1.626s 00:05:09.797 user 0m0.780s 00:05:09.797 sys 0m0.716s 00:05:09.797 04:54:20 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:09.797 04:54:20 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:09.797 ************************************ 00:05:09.797 END TEST env_vtophys 00:05:09.797 ************************************ 00:05:09.797 04:54:20 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:09.797 04:54:20 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:09.797 04:54:20 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:09.797 04:54:20 env -- common/autotest_common.sh@10 -- # set +x 00:05:09.797 
************************************ 00:05:09.797 START TEST env_pci 00:05:09.797 ************************************ 00:05:09.797 04:54:20 env.env_pci -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:09.797 00:05:09.797 00:05:09.797 CUnit - A unit testing framework for C - Version 2.1-3 00:05:09.797 http://cunit.sourceforge.net/ 00:05:09.797 00:05:09.797 00:05:09.797 Suite: pci 00:05:09.797 Test: pci_hook ...[2024-12-14 04:54:20.507096] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1049:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 69021 has claimed it 00:05:09.797 passed 00:05:09.797 00:05:09.797 Run Summary: Type Total Ran Passed Failed Inactive 00:05:09.797 suites 1 1 n/a 0 0 00:05:09.797 tests 1 1 1 0 0 00:05:09.797 asserts 25 25 25 0 n/a 00:05:09.797 00:05:09.797 Elapsed time = 0.007 seconds 00:05:09.797 EAL: Cannot find device (10000:00:01.0) 00:05:09.797 EAL: Failed to attach device on primary process 00:05:09.797 00:05:09.797 real 0m0.093s 00:05:09.797 user 0m0.038s 00:05:09.797 sys 0m0.054s 00:05:09.797 04:54:20 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:09.797 04:54:20 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:09.797 ************************************ 00:05:09.797 END TEST env_pci 00:05:09.797 ************************************ 00:05:09.797 04:54:20 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:09.798 04:54:20 env -- env/env.sh@15 -- # uname 00:05:09.798 04:54:20 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:09.798 04:54:20 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:09.798 04:54:20 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:09.798 04:54:20 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:05:09.798 04:54:20 env 
-- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:09.798 04:54:20 env -- common/autotest_common.sh@10 -- # set +x 00:05:09.798 ************************************ 00:05:09.798 START TEST env_dpdk_post_init 00:05:09.798 ************************************ 00:05:09.798 04:54:20 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:10.078 EAL: Detected CPU lcores: 10 00:05:10.078 EAL: Detected NUMA nodes: 1 00:05:10.078 EAL: Detected shared linkage of DPDK 00:05:10.078 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:10.078 EAL: Selected IOVA mode 'PA' 00:05:10.078 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:10.078 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:05:10.078 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:05:10.078 Starting DPDK initialization... 00:05:10.078 Starting SPDK post initialization... 00:05:10.078 SPDK NVMe probe 00:05:10.078 Attaching to 0000:00:10.0 00:05:10.078 Attaching to 0000:00:11.0 00:05:10.078 Attached to 0000:00:10.0 00:05:10.078 Attached to 0000:00:11.0 00:05:10.078 Cleaning up... 
00:05:10.078 00:05:10.078 real 0m0.252s 00:05:10.078 user 0m0.068s 00:05:10.078 sys 0m0.084s 00:05:10.078 04:54:20 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:10.078 04:54:20 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:10.078 ************************************ 00:05:10.078 END TEST env_dpdk_post_init 00:05:10.078 ************************************ 00:05:10.079 04:54:20 env -- env/env.sh@26 -- # uname 00:05:10.355 04:54:20 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:10.355 04:54:20 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:10.355 04:54:20 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:10.355 04:54:20 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:10.355 04:54:20 env -- common/autotest_common.sh@10 -- # set +x 00:05:10.355 ************************************ 00:05:10.355 START TEST env_mem_callbacks 00:05:10.355 ************************************ 00:05:10.355 04:54:20 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:10.355 EAL: Detected CPU lcores: 10 00:05:10.355 EAL: Detected NUMA nodes: 1 00:05:10.355 EAL: Detected shared linkage of DPDK 00:05:10.355 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:10.355 EAL: Selected IOVA mode 'PA' 00:05:10.355 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:10.355 00:05:10.355 00:05:10.355 CUnit - A unit testing framework for C - Version 2.1-3 00:05:10.355 http://cunit.sourceforge.net/ 00:05:10.355 00:05:10.355 00:05:10.355 Suite: memory 00:05:10.355 Test: test ... 
00:05:10.355 register 0x200000200000 2097152 00:05:10.355 malloc 3145728 00:05:10.355 register 0x200000400000 4194304 00:05:10.355 buf 0x200000500000 len 3145728 PASSED 00:05:10.355 malloc 64 00:05:10.355 buf 0x2000004fff40 len 64 PASSED 00:05:10.355 malloc 4194304 00:05:10.355 register 0x200000800000 6291456 00:05:10.355 buf 0x200000a00000 len 4194304 PASSED 00:05:10.355 free 0x200000500000 3145728 00:05:10.355 free 0x2000004fff40 64 00:05:10.355 unregister 0x200000400000 4194304 PASSED 00:05:10.355 free 0x200000a00000 4194304 00:05:10.355 unregister 0x200000800000 6291456 PASSED 00:05:10.355 malloc 8388608 00:05:10.355 register 0x200000400000 10485760 00:05:10.355 buf 0x200000600000 len 8388608 PASSED 00:05:10.355 free 0x200000600000 8388608 00:05:10.355 unregister 0x200000400000 10485760 PASSED 00:05:10.355 passed 00:05:10.355 00:05:10.355 Run Summary: Type Total Ran Passed Failed Inactive 00:05:10.355 suites 1 1 n/a 0 0 00:05:10.355 tests 1 1 1 0 0 00:05:10.355 asserts 15 15 15 0 n/a 00:05:10.355 00:05:10.355 Elapsed time = 0.011 seconds 00:05:10.355 00:05:10.355 real 0m0.201s 00:05:10.355 user 0m0.036s 00:05:10.355 sys 0m0.064s 00:05:10.355 04:54:21 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:10.355 04:54:21 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:10.355 ************************************ 00:05:10.355 END TEST env_mem_callbacks 00:05:10.355 ************************************ 00:05:10.355 00:05:10.355 real 0m3.020s 00:05:10.355 user 0m1.377s 00:05:10.355 sys 0m1.306s 00:05:10.355 04:54:21 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:10.355 04:54:21 env -- common/autotest_common.sh@10 -- # set +x 00:05:10.355 ************************************ 00:05:10.355 END TEST env 00:05:10.355 ************************************ 00:05:10.615 04:54:21 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:10.615 04:54:21 -- 
common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:10.615 04:54:21 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:10.615 04:54:21 -- common/autotest_common.sh@10 -- # set +x 00:05:10.615 ************************************ 00:05:10.615 START TEST rpc 00:05:10.615 ************************************ 00:05:10.615 04:54:21 rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:10.615 * Looking for test storage... 00:05:10.615 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:10.615 04:54:21 rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:10.615 04:54:21 rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:05:10.615 04:54:21 rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:10.615 04:54:21 rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:10.615 04:54:21 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:10.615 04:54:21 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:10.615 04:54:21 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:10.615 04:54:21 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:10.615 04:54:21 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:10.615 04:54:21 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:10.615 04:54:21 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:10.615 04:54:21 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:10.615 04:54:21 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:10.615 04:54:21 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:10.615 04:54:21 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:10.615 04:54:21 rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:10.615 04:54:21 rpc -- scripts/common.sh@345 -- # : 1 00:05:10.615 04:54:21 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:10.615 04:54:21 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:10.876 04:54:21 rpc -- scripts/common.sh@365 -- # decimal 1 00:05:10.876 04:54:21 rpc -- scripts/common.sh@353 -- # local d=1 00:05:10.876 04:54:21 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:10.876 04:54:21 rpc -- scripts/common.sh@355 -- # echo 1 00:05:10.876 04:54:21 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:10.876 04:54:21 rpc -- scripts/common.sh@366 -- # decimal 2 00:05:10.876 04:54:21 rpc -- scripts/common.sh@353 -- # local d=2 00:05:10.876 04:54:21 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:10.876 04:54:21 rpc -- scripts/common.sh@355 -- # echo 2 00:05:10.876 04:54:21 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:10.876 04:54:21 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:10.876 04:54:21 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:10.876 04:54:21 rpc -- scripts/common.sh@368 -- # return 0 00:05:10.876 04:54:21 rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:10.876 04:54:21 rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:10.876 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.876 --rc genhtml_branch_coverage=1 00:05:10.876 --rc genhtml_function_coverage=1 00:05:10.876 --rc genhtml_legend=1 00:05:10.876 --rc geninfo_all_blocks=1 00:05:10.876 --rc geninfo_unexecuted_blocks=1 00:05:10.876 00:05:10.876 ' 00:05:10.876 04:54:21 rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:10.876 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.876 --rc genhtml_branch_coverage=1 00:05:10.876 --rc genhtml_function_coverage=1 00:05:10.876 --rc genhtml_legend=1 00:05:10.876 --rc geninfo_all_blocks=1 00:05:10.876 --rc geninfo_unexecuted_blocks=1 00:05:10.876 00:05:10.876 ' 00:05:10.876 04:54:21 rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:10.876 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:05:10.876 --rc genhtml_branch_coverage=1 00:05:10.876 --rc genhtml_function_coverage=1 00:05:10.876 --rc genhtml_legend=1 00:05:10.876 --rc geninfo_all_blocks=1 00:05:10.876 --rc geninfo_unexecuted_blocks=1 00:05:10.876 00:05:10.876 ' 00:05:10.876 04:54:21 rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:10.876 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.876 --rc genhtml_branch_coverage=1 00:05:10.876 --rc genhtml_function_coverage=1 00:05:10.876 --rc genhtml_legend=1 00:05:10.876 --rc geninfo_all_blocks=1 00:05:10.876 --rc geninfo_unexecuted_blocks=1 00:05:10.876 00:05:10.876 ' 00:05:10.876 04:54:21 rpc -- rpc/rpc.sh@65 -- # spdk_pid=69148 00:05:10.876 04:54:21 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:05:10.876 04:54:21 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:10.876 04:54:21 rpc -- rpc/rpc.sh@67 -- # waitforlisten 69148 00:05:10.876 04:54:21 rpc -- common/autotest_common.sh@831 -- # '[' -z 69148 ']' 00:05:10.876 04:54:21 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:10.876 04:54:21 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:10.876 04:54:21 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:10.876 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:10.876 04:54:21 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:10.876 04:54:21 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:10.876 [2024-12-14 04:54:21.615488] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:05:10.876 [2024-12-14 04:54:21.615639] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69148 ] 00:05:11.137 [2024-12-14 04:54:21.777516] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:11.137 [2024-12-14 04:54:21.823893] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:11.137 [2024-12-14 04:54:21.823957] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 69148' to capture a snapshot of events at runtime. 00:05:11.137 [2024-12-14 04:54:21.823974] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:11.137 [2024-12-14 04:54:21.823984] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:11.137 [2024-12-14 04:54:21.823996] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid69148 for offline analysis/debug. 
00:05:11.137 [2024-12-14 04:54:21.824054] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:11.706 04:54:22 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:11.706 04:54:22 rpc -- common/autotest_common.sh@864 -- # return 0 00:05:11.706 04:54:22 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:11.706 04:54:22 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:11.706 04:54:22 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:11.706 04:54:22 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:11.706 04:54:22 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:11.706 04:54:22 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:11.706 04:54:22 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:11.706 ************************************ 00:05:11.706 START TEST rpc_integrity 00:05:11.706 ************************************ 00:05:11.706 04:54:22 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:05:11.706 04:54:22 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:11.706 04:54:22 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:11.706 04:54:22 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:11.706 04:54:22 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:11.706 04:54:22 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:11.706 04:54:22 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:11.706 04:54:22 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:11.706 04:54:22 
rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:11.706 04:54:22 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:11.706 04:54:22 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:11.706 04:54:22 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:11.706 04:54:22 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:11.706 04:54:22 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:11.706 04:54:22 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:11.706 04:54:22 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:11.706 04:54:22 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:11.706 04:54:22 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:11.706 { 00:05:11.706 "name": "Malloc0", 00:05:11.706 "aliases": [ 00:05:11.706 "3616bf67-5f52-4269-8ca5-b69f6dd83a66" 00:05:11.706 ], 00:05:11.706 "product_name": "Malloc disk", 00:05:11.706 "block_size": 512, 00:05:11.706 "num_blocks": 16384, 00:05:11.706 "uuid": "3616bf67-5f52-4269-8ca5-b69f6dd83a66", 00:05:11.706 "assigned_rate_limits": { 00:05:11.706 "rw_ios_per_sec": 0, 00:05:11.706 "rw_mbytes_per_sec": 0, 00:05:11.706 "r_mbytes_per_sec": 0, 00:05:11.706 "w_mbytes_per_sec": 0 00:05:11.706 }, 00:05:11.706 "claimed": false, 00:05:11.706 "zoned": false, 00:05:11.706 "supported_io_types": { 00:05:11.706 "read": true, 00:05:11.706 "write": true, 00:05:11.706 "unmap": true, 00:05:11.706 "flush": true, 00:05:11.706 "reset": true, 00:05:11.706 "nvme_admin": false, 00:05:11.706 "nvme_io": false, 00:05:11.706 "nvme_io_md": false, 00:05:11.706 "write_zeroes": true, 00:05:11.706 "zcopy": true, 00:05:11.706 "get_zone_info": false, 00:05:11.706 "zone_management": false, 00:05:11.707 "zone_append": false, 00:05:11.707 "compare": false, 00:05:11.707 "compare_and_write": false, 00:05:11.707 "abort": true, 00:05:11.707 "seek_hole": false, 
00:05:11.707 "seek_data": false, 00:05:11.707 "copy": true, 00:05:11.707 "nvme_iov_md": false 00:05:11.707 }, 00:05:11.707 "memory_domains": [ 00:05:11.707 { 00:05:11.707 "dma_device_id": "system", 00:05:11.707 "dma_device_type": 1 00:05:11.707 }, 00:05:11.707 { 00:05:11.707 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:11.707 "dma_device_type": 2 00:05:11.707 } 00:05:11.707 ], 00:05:11.707 "driver_specific": {} 00:05:11.707 } 00:05:11.707 ]' 00:05:11.707 04:54:22 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:11.707 04:54:22 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:11.707 04:54:22 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:11.707 04:54:22 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:11.707 04:54:22 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:11.707 [2024-12-14 04:54:22.571818] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:11.707 [2024-12-14 04:54:22.571893] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:11.707 [2024-12-14 04:54:22.571957] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:05:11.707 [2024-12-14 04:54:22.571982] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:11.707 [2024-12-14 04:54:22.574310] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:11.707 [2024-12-14 04:54:22.574346] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:11.707 Passthru0 00:05:11.707 04:54:22 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:11.707 04:54:22 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:11.707 04:54:22 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:11.707 04:54:22 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 
00:05:11.967 04:54:22 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:11.967 04:54:22 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:11.967 { 00:05:11.967 "name": "Malloc0", 00:05:11.967 "aliases": [ 00:05:11.967 "3616bf67-5f52-4269-8ca5-b69f6dd83a66" 00:05:11.967 ], 00:05:11.967 "product_name": "Malloc disk", 00:05:11.967 "block_size": 512, 00:05:11.967 "num_blocks": 16384, 00:05:11.967 "uuid": "3616bf67-5f52-4269-8ca5-b69f6dd83a66", 00:05:11.967 "assigned_rate_limits": { 00:05:11.967 "rw_ios_per_sec": 0, 00:05:11.967 "rw_mbytes_per_sec": 0, 00:05:11.967 "r_mbytes_per_sec": 0, 00:05:11.967 "w_mbytes_per_sec": 0 00:05:11.967 }, 00:05:11.967 "claimed": true, 00:05:11.967 "claim_type": "exclusive_write", 00:05:11.967 "zoned": false, 00:05:11.967 "supported_io_types": { 00:05:11.967 "read": true, 00:05:11.967 "write": true, 00:05:11.967 "unmap": true, 00:05:11.967 "flush": true, 00:05:11.967 "reset": true, 00:05:11.967 "nvme_admin": false, 00:05:11.967 "nvme_io": false, 00:05:11.967 "nvme_io_md": false, 00:05:11.967 "write_zeroes": true, 00:05:11.967 "zcopy": true, 00:05:11.967 "get_zone_info": false, 00:05:11.967 "zone_management": false, 00:05:11.967 "zone_append": false, 00:05:11.967 "compare": false, 00:05:11.967 "compare_and_write": false, 00:05:11.967 "abort": true, 00:05:11.967 "seek_hole": false, 00:05:11.967 "seek_data": false, 00:05:11.967 "copy": true, 00:05:11.967 "nvme_iov_md": false 00:05:11.967 }, 00:05:11.967 "memory_domains": [ 00:05:11.967 { 00:05:11.967 "dma_device_id": "system", 00:05:11.967 "dma_device_type": 1 00:05:11.967 }, 00:05:11.967 { 00:05:11.967 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:11.967 "dma_device_type": 2 00:05:11.967 } 00:05:11.967 ], 00:05:11.967 "driver_specific": {} 00:05:11.967 }, 00:05:11.967 { 00:05:11.967 "name": "Passthru0", 00:05:11.967 "aliases": [ 00:05:11.967 "c5df48db-56de-5dc1-83be-32079212eb00" 00:05:11.967 ], 00:05:11.967 "product_name": "passthru", 00:05:11.967 
"block_size": 512, 00:05:11.967 "num_blocks": 16384, 00:05:11.967 "uuid": "c5df48db-56de-5dc1-83be-32079212eb00", 00:05:11.967 "assigned_rate_limits": { 00:05:11.967 "rw_ios_per_sec": 0, 00:05:11.967 "rw_mbytes_per_sec": 0, 00:05:11.967 "r_mbytes_per_sec": 0, 00:05:11.967 "w_mbytes_per_sec": 0 00:05:11.967 }, 00:05:11.967 "claimed": false, 00:05:11.967 "zoned": false, 00:05:11.967 "supported_io_types": { 00:05:11.967 "read": true, 00:05:11.967 "write": true, 00:05:11.967 "unmap": true, 00:05:11.967 "flush": true, 00:05:11.967 "reset": true, 00:05:11.967 "nvme_admin": false, 00:05:11.967 "nvme_io": false, 00:05:11.967 "nvme_io_md": false, 00:05:11.967 "write_zeroes": true, 00:05:11.967 "zcopy": true, 00:05:11.967 "get_zone_info": false, 00:05:11.967 "zone_management": false, 00:05:11.967 "zone_append": false, 00:05:11.967 "compare": false, 00:05:11.967 "compare_and_write": false, 00:05:11.967 "abort": true, 00:05:11.967 "seek_hole": false, 00:05:11.967 "seek_data": false, 00:05:11.967 "copy": true, 00:05:11.967 "nvme_iov_md": false 00:05:11.967 }, 00:05:11.967 "memory_domains": [ 00:05:11.967 { 00:05:11.967 "dma_device_id": "system", 00:05:11.967 "dma_device_type": 1 00:05:11.967 }, 00:05:11.967 { 00:05:11.967 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:11.967 "dma_device_type": 2 00:05:11.967 } 00:05:11.967 ], 00:05:11.967 "driver_specific": { 00:05:11.967 "passthru": { 00:05:11.967 "name": "Passthru0", 00:05:11.967 "base_bdev_name": "Malloc0" 00:05:11.967 } 00:05:11.967 } 00:05:11.967 } 00:05:11.967 ]' 00:05:11.967 04:54:22 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:11.967 04:54:22 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:11.967 04:54:22 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:11.967 04:54:22 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:11.967 04:54:22 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:11.967 04:54:22 rpc.rpc_integrity 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:11.967 04:54:22 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:11.967 04:54:22 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:11.967 04:54:22 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:11.967 04:54:22 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:11.967 04:54:22 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:11.967 04:54:22 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:11.967 04:54:22 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:11.967 04:54:22 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:11.967 04:54:22 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:11.967 04:54:22 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:11.967 04:54:22 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:11.967 00:05:11.967 real 0m0.310s 00:05:11.967 user 0m0.190s 00:05:11.967 sys 0m0.043s 00:05:11.967 04:54:22 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:11.967 04:54:22 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:11.967 ************************************ 00:05:11.967 END TEST rpc_integrity 00:05:11.967 ************************************ 00:05:11.967 04:54:22 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:11.967 04:54:22 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:11.967 04:54:22 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:11.967 04:54:22 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:11.967 ************************************ 00:05:11.967 START TEST rpc_plugins 00:05:11.967 ************************************ 00:05:11.967 04:54:22 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:05:11.967 04:54:22 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # 
rpc_cmd --plugin rpc_plugin create_malloc 00:05:11.967 04:54:22 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:11.967 04:54:22 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:11.967 04:54:22 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:11.967 04:54:22 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:11.967 04:54:22 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:11.967 04:54:22 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:11.967 04:54:22 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:11.967 04:54:22 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:11.967 04:54:22 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:11.967 { 00:05:11.967 "name": "Malloc1", 00:05:11.967 "aliases": [ 00:05:11.967 "c47c35b3-a174-4fc1-b4de-20f4d726b2b0" 00:05:11.967 ], 00:05:11.967 "product_name": "Malloc disk", 00:05:11.967 "block_size": 4096, 00:05:11.967 "num_blocks": 256, 00:05:11.967 "uuid": "c47c35b3-a174-4fc1-b4de-20f4d726b2b0", 00:05:11.967 "assigned_rate_limits": { 00:05:11.967 "rw_ios_per_sec": 0, 00:05:11.967 "rw_mbytes_per_sec": 0, 00:05:11.967 "r_mbytes_per_sec": 0, 00:05:11.967 "w_mbytes_per_sec": 0 00:05:11.967 }, 00:05:11.967 "claimed": false, 00:05:11.967 "zoned": false, 00:05:11.967 "supported_io_types": { 00:05:11.967 "read": true, 00:05:11.967 "write": true, 00:05:11.967 "unmap": true, 00:05:11.967 "flush": true, 00:05:11.967 "reset": true, 00:05:11.967 "nvme_admin": false, 00:05:11.967 "nvme_io": false, 00:05:11.967 "nvme_io_md": false, 00:05:11.967 "write_zeroes": true, 00:05:11.967 "zcopy": true, 00:05:11.967 "get_zone_info": false, 00:05:11.967 "zone_management": false, 00:05:11.967 "zone_append": false, 00:05:11.967 "compare": false, 00:05:11.967 "compare_and_write": false, 00:05:11.967 "abort": true, 00:05:11.967 "seek_hole": false, 00:05:11.967 "seek_data": false, 00:05:11.967 "copy": 
true, 00:05:11.967 "nvme_iov_md": false 00:05:11.967 }, 00:05:11.967 "memory_domains": [ 00:05:11.967 { 00:05:11.967 "dma_device_id": "system", 00:05:11.967 "dma_device_type": 1 00:05:11.967 }, 00:05:11.967 { 00:05:11.968 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:11.968 "dma_device_type": 2 00:05:11.968 } 00:05:11.968 ], 00:05:11.968 "driver_specific": {} 00:05:11.968 } 00:05:11.968 ]' 00:05:11.968 04:54:22 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:12.227 04:54:22 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:12.227 04:54:22 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:12.227 04:54:22 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:12.227 04:54:22 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:12.227 04:54:22 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:12.227 04:54:22 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:12.227 04:54:22 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:12.227 04:54:22 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:12.227 04:54:22 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:12.227 04:54:22 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:12.227 04:54:22 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:12.227 ************************************ 00:05:12.227 END TEST rpc_plugins 00:05:12.227 ************************************ 00:05:12.227 04:54:22 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:12.227 00:05:12.227 real 0m0.164s 00:05:12.227 user 0m0.095s 00:05:12.227 sys 0m0.029s 00:05:12.227 04:54:22 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:12.227 04:54:22 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:12.227 04:54:22 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:12.227 04:54:22 rpc -- 
common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:12.227 04:54:22 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:12.227 04:54:22 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:12.227 ************************************ 00:05:12.227 START TEST rpc_trace_cmd_test 00:05:12.227 ************************************ 00:05:12.228 04:54:23 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:05:12.228 04:54:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:12.228 04:54:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:12.228 04:54:23 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:12.228 04:54:23 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:12.228 04:54:23 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:12.228 04:54:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:12.228 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid69148", 00:05:12.228 "tpoint_group_mask": "0x8", 00:05:12.228 "iscsi_conn": { 00:05:12.228 "mask": "0x2", 00:05:12.228 "tpoint_mask": "0x0" 00:05:12.228 }, 00:05:12.228 "scsi": { 00:05:12.228 "mask": "0x4", 00:05:12.228 "tpoint_mask": "0x0" 00:05:12.228 }, 00:05:12.228 "bdev": { 00:05:12.228 "mask": "0x8", 00:05:12.228 "tpoint_mask": "0xffffffffffffffff" 00:05:12.228 }, 00:05:12.228 "nvmf_rdma": { 00:05:12.228 "mask": "0x10", 00:05:12.228 "tpoint_mask": "0x0" 00:05:12.228 }, 00:05:12.228 "nvmf_tcp": { 00:05:12.228 "mask": "0x20", 00:05:12.228 "tpoint_mask": "0x0" 00:05:12.228 }, 00:05:12.228 "ftl": { 00:05:12.228 "mask": "0x40", 00:05:12.228 "tpoint_mask": "0x0" 00:05:12.228 }, 00:05:12.228 "blobfs": { 00:05:12.228 "mask": "0x80", 00:05:12.228 "tpoint_mask": "0x0" 00:05:12.228 }, 00:05:12.228 "dsa": { 00:05:12.228 "mask": "0x200", 00:05:12.228 "tpoint_mask": "0x0" 00:05:12.228 }, 00:05:12.228 "thread": { 00:05:12.228 "mask": "0x400", 00:05:12.228 
"tpoint_mask": "0x0" 00:05:12.228 }, 00:05:12.228 "nvme_pcie": { 00:05:12.228 "mask": "0x800", 00:05:12.228 "tpoint_mask": "0x0" 00:05:12.228 }, 00:05:12.228 "iaa": { 00:05:12.228 "mask": "0x1000", 00:05:12.228 "tpoint_mask": "0x0" 00:05:12.228 }, 00:05:12.228 "nvme_tcp": { 00:05:12.228 "mask": "0x2000", 00:05:12.228 "tpoint_mask": "0x0" 00:05:12.228 }, 00:05:12.228 "bdev_nvme": { 00:05:12.228 "mask": "0x4000", 00:05:12.228 "tpoint_mask": "0x0" 00:05:12.228 }, 00:05:12.228 "sock": { 00:05:12.228 "mask": "0x8000", 00:05:12.228 "tpoint_mask": "0x0" 00:05:12.228 }, 00:05:12.228 "blob": { 00:05:12.228 "mask": "0x10000", 00:05:12.228 "tpoint_mask": "0x0" 00:05:12.228 }, 00:05:12.228 "bdev_raid": { 00:05:12.228 "mask": "0x20000", 00:05:12.228 "tpoint_mask": "0x0" 00:05:12.228 } 00:05:12.228 }' 00:05:12.228 04:54:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:12.228 04:54:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 18 -gt 2 ']' 00:05:12.228 04:54:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:12.228 04:54:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:12.228 04:54:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:12.488 04:54:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:12.488 04:54:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:12.488 04:54:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:12.488 04:54:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:12.488 ************************************ 00:05:12.488 END TEST rpc_trace_cmd_test 00:05:12.488 ************************************ 00:05:12.488 04:54:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:12.488 00:05:12.488 real 0m0.236s 00:05:12.488 user 0m0.189s 00:05:12.488 sys 0m0.034s 00:05:12.488 04:54:23 rpc.rpc_trace_cmd_test -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:05:12.488 04:54:23 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:12.488 04:54:23 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:12.488 04:54:23 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:12.488 04:54:23 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:12.488 04:54:23 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:12.488 04:54:23 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:12.488 04:54:23 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:12.488 ************************************ 00:05:12.488 START TEST rpc_daemon_integrity 00:05:12.488 ************************************ 00:05:12.488 04:54:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:05:12.488 04:54:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:12.488 04:54:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:12.488 04:54:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:12.488 04:54:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:12.488 04:54:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:12.488 04:54:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:12.488 04:54:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:12.488 04:54:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:12.488 04:54:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:12.488 04:54:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:12.747 04:54:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:12.747 04:54:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:12.747 04:54:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # 
rpc_cmd bdev_get_bdevs 00:05:12.747 04:54:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:12.747 04:54:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:12.747 04:54:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:12.747 04:54:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:12.747 { 00:05:12.747 "name": "Malloc2", 00:05:12.747 "aliases": [ 00:05:12.747 "3ec73262-a414-48e7-8015-c3b351ad92aa" 00:05:12.747 ], 00:05:12.747 "product_name": "Malloc disk", 00:05:12.747 "block_size": 512, 00:05:12.747 "num_blocks": 16384, 00:05:12.747 "uuid": "3ec73262-a414-48e7-8015-c3b351ad92aa", 00:05:12.747 "assigned_rate_limits": { 00:05:12.747 "rw_ios_per_sec": 0, 00:05:12.747 "rw_mbytes_per_sec": 0, 00:05:12.747 "r_mbytes_per_sec": 0, 00:05:12.747 "w_mbytes_per_sec": 0 00:05:12.747 }, 00:05:12.747 "claimed": false, 00:05:12.747 "zoned": false, 00:05:12.747 "supported_io_types": { 00:05:12.747 "read": true, 00:05:12.747 "write": true, 00:05:12.747 "unmap": true, 00:05:12.747 "flush": true, 00:05:12.747 "reset": true, 00:05:12.747 "nvme_admin": false, 00:05:12.747 "nvme_io": false, 00:05:12.747 "nvme_io_md": false, 00:05:12.747 "write_zeroes": true, 00:05:12.747 "zcopy": true, 00:05:12.747 "get_zone_info": false, 00:05:12.747 "zone_management": false, 00:05:12.747 "zone_append": false, 00:05:12.747 "compare": false, 00:05:12.747 "compare_and_write": false, 00:05:12.747 "abort": true, 00:05:12.747 "seek_hole": false, 00:05:12.747 "seek_data": false, 00:05:12.747 "copy": true, 00:05:12.747 "nvme_iov_md": false 00:05:12.747 }, 00:05:12.747 "memory_domains": [ 00:05:12.747 { 00:05:12.747 "dma_device_id": "system", 00:05:12.747 "dma_device_type": 1 00:05:12.747 }, 00:05:12.747 { 00:05:12.747 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:12.747 "dma_device_type": 2 00:05:12.747 } 00:05:12.747 ], 00:05:12.747 "driver_specific": {} 00:05:12.747 } 00:05:12.747 ]' 
00:05:12.747 04:54:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:12.747 04:54:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:12.747 04:54:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:12.748 04:54:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:12.748 04:54:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:12.748 [2024-12-14 04:54:23.447253] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:12.748 [2024-12-14 04:54:23.447306] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:12.748 [2024-12-14 04:54:23.447336] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:05:12.748 [2024-12-14 04:54:23.447348] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:12.748 [2024-12-14 04:54:23.449603] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:12.748 [2024-12-14 04:54:23.449639] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:12.748 Passthru0 00:05:12.748 04:54:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:12.748 04:54:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:12.748 04:54:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:12.748 04:54:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:12.748 04:54:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:12.748 04:54:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:12.748 { 00:05:12.748 "name": "Malloc2", 00:05:12.748 "aliases": [ 00:05:12.748 "3ec73262-a414-48e7-8015-c3b351ad92aa" 00:05:12.748 ], 00:05:12.748 "product_name": "Malloc disk", 00:05:12.748 "block_size": 
512, 00:05:12.748 "num_blocks": 16384, 00:05:12.748 "uuid": "3ec73262-a414-48e7-8015-c3b351ad92aa", 00:05:12.748 "assigned_rate_limits": { 00:05:12.748 "rw_ios_per_sec": 0, 00:05:12.748 "rw_mbytes_per_sec": 0, 00:05:12.748 "r_mbytes_per_sec": 0, 00:05:12.748 "w_mbytes_per_sec": 0 00:05:12.748 }, 00:05:12.748 "claimed": true, 00:05:12.748 "claim_type": "exclusive_write", 00:05:12.748 "zoned": false, 00:05:12.748 "supported_io_types": { 00:05:12.748 "read": true, 00:05:12.748 "write": true, 00:05:12.748 "unmap": true, 00:05:12.748 "flush": true, 00:05:12.748 "reset": true, 00:05:12.748 "nvme_admin": false, 00:05:12.748 "nvme_io": false, 00:05:12.748 "nvme_io_md": false, 00:05:12.748 "write_zeroes": true, 00:05:12.748 "zcopy": true, 00:05:12.748 "get_zone_info": false, 00:05:12.748 "zone_management": false, 00:05:12.748 "zone_append": false, 00:05:12.748 "compare": false, 00:05:12.748 "compare_and_write": false, 00:05:12.748 "abort": true, 00:05:12.748 "seek_hole": false, 00:05:12.748 "seek_data": false, 00:05:12.748 "copy": true, 00:05:12.748 "nvme_iov_md": false 00:05:12.748 }, 00:05:12.748 "memory_domains": [ 00:05:12.748 { 00:05:12.748 "dma_device_id": "system", 00:05:12.748 "dma_device_type": 1 00:05:12.748 }, 00:05:12.748 { 00:05:12.748 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:12.748 "dma_device_type": 2 00:05:12.748 } 00:05:12.748 ], 00:05:12.748 "driver_specific": {} 00:05:12.748 }, 00:05:12.748 { 00:05:12.748 "name": "Passthru0", 00:05:12.748 "aliases": [ 00:05:12.748 "0e8669d9-e450-5159-a05d-92132bb53d81" 00:05:12.748 ], 00:05:12.748 "product_name": "passthru", 00:05:12.748 "block_size": 512, 00:05:12.748 "num_blocks": 16384, 00:05:12.748 "uuid": "0e8669d9-e450-5159-a05d-92132bb53d81", 00:05:12.748 "assigned_rate_limits": { 00:05:12.748 "rw_ios_per_sec": 0, 00:05:12.748 "rw_mbytes_per_sec": 0, 00:05:12.748 "r_mbytes_per_sec": 0, 00:05:12.748 "w_mbytes_per_sec": 0 00:05:12.748 }, 00:05:12.748 "claimed": false, 00:05:12.748 "zoned": false, 00:05:12.748 
"supported_io_types": { 00:05:12.748 "read": true, 00:05:12.748 "write": true, 00:05:12.748 "unmap": true, 00:05:12.748 "flush": true, 00:05:12.748 "reset": true, 00:05:12.748 "nvme_admin": false, 00:05:12.748 "nvme_io": false, 00:05:12.748 "nvme_io_md": false, 00:05:12.748 "write_zeroes": true, 00:05:12.748 "zcopy": true, 00:05:12.748 "get_zone_info": false, 00:05:12.748 "zone_management": false, 00:05:12.748 "zone_append": false, 00:05:12.748 "compare": false, 00:05:12.748 "compare_and_write": false, 00:05:12.748 "abort": true, 00:05:12.748 "seek_hole": false, 00:05:12.748 "seek_data": false, 00:05:12.748 "copy": true, 00:05:12.748 "nvme_iov_md": false 00:05:12.748 }, 00:05:12.748 "memory_domains": [ 00:05:12.748 { 00:05:12.748 "dma_device_id": "system", 00:05:12.748 "dma_device_type": 1 00:05:12.748 }, 00:05:12.748 { 00:05:12.748 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:12.748 "dma_device_type": 2 00:05:12.748 } 00:05:12.748 ], 00:05:12.748 "driver_specific": { 00:05:12.748 "passthru": { 00:05:12.748 "name": "Passthru0", 00:05:12.748 "base_bdev_name": "Malloc2" 00:05:12.748 } 00:05:12.748 } 00:05:12.748 } 00:05:12.748 ]' 00:05:12.748 04:54:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:12.748 04:54:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:12.748 04:54:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:12.748 04:54:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:12.748 04:54:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:12.748 04:54:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:12.748 04:54:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:12.748 04:54:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:12.748 04:54:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # 
set +x 00:05:12.748 04:54:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:12.748 04:54:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:12.748 04:54:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:12.748 04:54:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:12.748 04:54:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:12.748 04:54:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:12.748 04:54:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:12.748 ************************************ 00:05:12.748 END TEST rpc_daemon_integrity 00:05:12.748 ************************************ 00:05:12.748 04:54:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:12.748 00:05:12.748 real 0m0.317s 00:05:12.748 user 0m0.199s 00:05:12.748 sys 0m0.047s 00:05:12.748 04:54:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:12.748 04:54:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:13.008 04:54:23 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:13.008 04:54:23 rpc -- rpc/rpc.sh@84 -- # killprocess 69148 00:05:13.008 04:54:23 rpc -- common/autotest_common.sh@950 -- # '[' -z 69148 ']' 00:05:13.008 04:54:23 rpc -- common/autotest_common.sh@954 -- # kill -0 69148 00:05:13.008 04:54:23 rpc -- common/autotest_common.sh@955 -- # uname 00:05:13.008 04:54:23 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:13.008 04:54:23 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69148 00:05:13.008 killing process with pid 69148 00:05:13.008 04:54:23 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:13.008 04:54:23 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:13.008 04:54:23 rpc -- common/autotest_common.sh@968 -- # echo 
'killing process with pid 69148' 00:05:13.008 04:54:23 rpc -- common/autotest_common.sh@969 -- # kill 69148 00:05:13.008 04:54:23 rpc -- common/autotest_common.sh@974 -- # wait 69148 00:05:13.268 00:05:13.268 real 0m2.805s 00:05:13.268 user 0m3.339s 00:05:13.268 sys 0m0.839s 00:05:13.268 ************************************ 00:05:13.268 04:54:24 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:13.268 04:54:24 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:13.268 END TEST rpc 00:05:13.268 ************************************ 00:05:13.528 04:54:24 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:13.528 04:54:24 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:13.528 04:54:24 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:13.528 04:54:24 -- common/autotest_common.sh@10 -- # set +x 00:05:13.528 ************************************ 00:05:13.528 START TEST skip_rpc 00:05:13.528 ************************************ 00:05:13.528 04:54:24 skip_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:13.528 * Looking for test storage... 
00:05:13.528 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:13.528 04:54:24 skip_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:13.528 04:54:24 skip_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:05:13.528 04:54:24 skip_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:13.528 04:54:24 skip_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:13.528 04:54:24 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:13.528 04:54:24 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:13.528 04:54:24 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:13.528 04:54:24 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:13.528 04:54:24 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:13.528 04:54:24 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:13.528 04:54:24 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:13.528 04:54:24 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:13.528 04:54:24 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:13.528 04:54:24 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:13.528 04:54:24 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:13.528 04:54:24 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:13.528 04:54:24 skip_rpc -- scripts/common.sh@345 -- # : 1 00:05:13.528 04:54:24 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:13.528 04:54:24 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:13.528 04:54:24 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:13.528 04:54:24 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:05:13.528 04:54:24 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:13.528 04:54:24 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:05:13.528 04:54:24 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:13.528 04:54:24 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:13.528 04:54:24 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:05:13.528 04:54:24 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:13.528 04:54:24 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:05:13.528 04:54:24 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:13.528 04:54:24 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:13.528 04:54:24 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:13.528 04:54:24 skip_rpc -- scripts/common.sh@368 -- # return 0 00:05:13.528 04:54:24 skip_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:13.528 04:54:24 skip_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:13.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.528 --rc genhtml_branch_coverage=1 00:05:13.528 --rc genhtml_function_coverage=1 00:05:13.528 --rc genhtml_legend=1 00:05:13.528 --rc geninfo_all_blocks=1 00:05:13.528 --rc geninfo_unexecuted_blocks=1 00:05:13.528 00:05:13.528 ' 00:05:13.528 04:54:24 skip_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:13.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.528 --rc genhtml_branch_coverage=1 00:05:13.528 --rc genhtml_function_coverage=1 00:05:13.528 --rc genhtml_legend=1 00:05:13.528 --rc geninfo_all_blocks=1 00:05:13.528 --rc geninfo_unexecuted_blocks=1 00:05:13.528 00:05:13.528 ' 00:05:13.528 04:54:24 skip_rpc -- common/autotest_common.sh@1695 -- # export 
'LCOV=lcov 00:05:13.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.528 --rc genhtml_branch_coverage=1 00:05:13.528 --rc genhtml_function_coverage=1 00:05:13.528 --rc genhtml_legend=1 00:05:13.528 --rc geninfo_all_blocks=1 00:05:13.528 --rc geninfo_unexecuted_blocks=1 00:05:13.528 00:05:13.528 ' 00:05:13.528 04:54:24 skip_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:13.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.528 --rc genhtml_branch_coverage=1 00:05:13.528 --rc genhtml_function_coverage=1 00:05:13.528 --rc genhtml_legend=1 00:05:13.528 --rc geninfo_all_blocks=1 00:05:13.528 --rc geninfo_unexecuted_blocks=1 00:05:13.528 00:05:13.528 ' 00:05:13.528 04:54:24 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:13.528 04:54:24 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:13.528 04:54:24 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:13.529 04:54:24 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:13.529 04:54:24 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:13.529 04:54:24 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:13.529 ************************************ 00:05:13.529 START TEST skip_rpc 00:05:13.529 ************************************ 00:05:13.529 04:54:24 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:05:13.529 04:54:24 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=69351 00:05:13.529 04:54:24 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:13.529 04:54:24 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:13.529 04:54:24 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:13.788 [2024-12-14 04:54:24.493027] Starting SPDK v24.09.1-pre 
git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:05:13.788 [2024-12-14 04:54:24.493236] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69351 ] 00:05:13.788 [2024-12-14 04:54:24.651596] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:14.048 [2024-12-14 04:54:24.699147] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.326 04:54:29 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:19.326 04:54:29 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:05:19.326 04:54:29 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:19.326 04:54:29 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:05:19.326 04:54:29 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:19.326 04:54:29 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:05:19.326 04:54:29 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:19.326 04:54:29 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:05:19.326 04:54:29 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:19.326 04:54:29 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:19.326 04:54:29 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:19.326 04:54:29 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:05:19.326 04:54:29 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:19.326 04:54:29 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:19.326 04:54:29 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( 
!es == 0 )) 00:05:19.326 04:54:29 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:19.326 04:54:29 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 69351 00:05:19.326 04:54:29 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 69351 ']' 00:05:19.326 04:54:29 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 69351 00:05:19.326 04:54:29 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:05:19.326 04:54:29 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:19.326 04:54:29 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69351 00:05:19.326 04:54:29 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:19.326 04:54:29 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:19.326 04:54:29 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69351' 00:05:19.326 killing process with pid 69351 00:05:19.326 04:54:29 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 69351 00:05:19.326 04:54:29 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 69351 00:05:19.326 00:05:19.326 real 0m5.452s 00:05:19.326 user 0m5.025s 00:05:19.326 sys 0m0.347s 00:05:19.326 04:54:29 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:19.326 04:54:29 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:19.326 ************************************ 00:05:19.326 END TEST skip_rpc 00:05:19.326 ************************************ 00:05:19.326 04:54:29 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:19.326 04:54:29 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:19.326 04:54:29 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:19.326 04:54:29 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:19.326 
************************************ 00:05:19.326 START TEST skip_rpc_with_json 00:05:19.326 ************************************ 00:05:19.326 04:54:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:05:19.326 04:54:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:19.326 04:54:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=69438 00:05:19.326 04:54:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:19.326 04:54:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:19.326 04:54:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 69438 00:05:19.326 04:54:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 69438 ']' 00:05:19.326 04:54:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:19.326 04:54:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:19.326 04:54:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:19.326 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:19.326 04:54:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:19.326 04:54:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:19.326 [2024-12-14 04:54:30.012473] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:05:19.326 [2024-12-14 04:54:30.012703] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69438 ] 00:05:19.326 [2024-12-14 04:54:30.171643] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:19.586 [2024-12-14 04:54:30.218758] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.156 04:54:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:20.156 04:54:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:05:20.156 04:54:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:20.156 04:54:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:20.156 04:54:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:20.156 [2024-12-14 04:54:30.821288] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:20.156 request: 00:05:20.156 { 00:05:20.156 "trtype": "tcp", 00:05:20.156 "method": "nvmf_get_transports", 00:05:20.156 "req_id": 1 00:05:20.156 } 00:05:20.156 Got JSON-RPC error response 00:05:20.156 response: 00:05:20.156 { 00:05:20.156 "code": -19, 00:05:20.156 "message": "No such device" 00:05:20.156 } 00:05:20.156 04:54:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:20.156 04:54:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:20.156 04:54:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:20.156 04:54:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:20.156 [2024-12-14 04:54:30.833380] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
00:05:20.156 04:54:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:20.156 04:54:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:20.156 04:54:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:20.156 04:54:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:20.156 04:54:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:20.156 04:54:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:20.156 { 00:05:20.156 "subsystems": [ 00:05:20.156 { 00:05:20.156 "subsystem": "fsdev", 00:05:20.156 "config": [ 00:05:20.156 { 00:05:20.156 "method": "fsdev_set_opts", 00:05:20.156 "params": { 00:05:20.156 "fsdev_io_pool_size": 65535, 00:05:20.156 "fsdev_io_cache_size": 256 00:05:20.156 } 00:05:20.156 } 00:05:20.156 ] 00:05:20.156 }, 00:05:20.156 { 00:05:20.156 "subsystem": "keyring", 00:05:20.156 "config": [] 00:05:20.156 }, 00:05:20.156 { 00:05:20.156 "subsystem": "iobuf", 00:05:20.156 "config": [ 00:05:20.156 { 00:05:20.156 "method": "iobuf_set_options", 00:05:20.156 "params": { 00:05:20.156 "small_pool_count": 8192, 00:05:20.156 "large_pool_count": 1024, 00:05:20.156 "small_bufsize": 8192, 00:05:20.156 "large_bufsize": 135168 00:05:20.156 } 00:05:20.156 } 00:05:20.156 ] 00:05:20.156 }, 00:05:20.156 { 00:05:20.156 "subsystem": "sock", 00:05:20.156 "config": [ 00:05:20.156 { 00:05:20.156 "method": "sock_set_default_impl", 00:05:20.156 "params": { 00:05:20.156 "impl_name": "posix" 00:05:20.156 } 00:05:20.156 }, 00:05:20.156 { 00:05:20.156 "method": "sock_impl_set_options", 00:05:20.156 "params": { 00:05:20.156 "impl_name": "ssl", 00:05:20.156 "recv_buf_size": 4096, 00:05:20.156 "send_buf_size": 4096, 00:05:20.156 "enable_recv_pipe": true, 00:05:20.156 "enable_quickack": false, 00:05:20.156 "enable_placement_id": 0, 00:05:20.156 
"enable_zerocopy_send_server": true, 00:05:20.156 "enable_zerocopy_send_client": false, 00:05:20.156 "zerocopy_threshold": 0, 00:05:20.156 "tls_version": 0, 00:05:20.156 "enable_ktls": false 00:05:20.156 } 00:05:20.156 }, 00:05:20.156 { 00:05:20.156 "method": "sock_impl_set_options", 00:05:20.156 "params": { 00:05:20.156 "impl_name": "posix", 00:05:20.156 "recv_buf_size": 2097152, 00:05:20.156 "send_buf_size": 2097152, 00:05:20.156 "enable_recv_pipe": true, 00:05:20.156 "enable_quickack": false, 00:05:20.156 "enable_placement_id": 0, 00:05:20.156 "enable_zerocopy_send_server": true, 00:05:20.156 "enable_zerocopy_send_client": false, 00:05:20.156 "zerocopy_threshold": 0, 00:05:20.156 "tls_version": 0, 00:05:20.156 "enable_ktls": false 00:05:20.157 } 00:05:20.157 } 00:05:20.157 ] 00:05:20.157 }, 00:05:20.157 { 00:05:20.157 "subsystem": "vmd", 00:05:20.157 "config": [] 00:05:20.157 }, 00:05:20.157 { 00:05:20.157 "subsystem": "accel", 00:05:20.157 "config": [ 00:05:20.157 { 00:05:20.157 "method": "accel_set_options", 00:05:20.157 "params": { 00:05:20.157 "small_cache_size": 128, 00:05:20.157 "large_cache_size": 16, 00:05:20.157 "task_count": 2048, 00:05:20.157 "sequence_count": 2048, 00:05:20.157 "buf_count": 2048 00:05:20.157 } 00:05:20.157 } 00:05:20.157 ] 00:05:20.157 }, 00:05:20.157 { 00:05:20.157 "subsystem": "bdev", 00:05:20.157 "config": [ 00:05:20.157 { 00:05:20.157 "method": "bdev_set_options", 00:05:20.157 "params": { 00:05:20.157 "bdev_io_pool_size": 65535, 00:05:20.157 "bdev_io_cache_size": 256, 00:05:20.157 "bdev_auto_examine": true, 00:05:20.157 "iobuf_small_cache_size": 128, 00:05:20.157 "iobuf_large_cache_size": 16 00:05:20.157 } 00:05:20.157 }, 00:05:20.157 { 00:05:20.157 "method": "bdev_raid_set_options", 00:05:20.157 "params": { 00:05:20.157 "process_window_size_kb": 1024, 00:05:20.157 "process_max_bandwidth_mb_sec": 0 00:05:20.157 } 00:05:20.157 }, 00:05:20.157 { 00:05:20.157 "method": "bdev_iscsi_set_options", 00:05:20.157 "params": { 00:05:20.157 
"timeout_sec": 30 00:05:20.157 } 00:05:20.157 }, 00:05:20.157 { 00:05:20.157 "method": "bdev_nvme_set_options", 00:05:20.157 "params": { 00:05:20.157 "action_on_timeout": "none", 00:05:20.157 "timeout_us": 0, 00:05:20.157 "timeout_admin_us": 0, 00:05:20.157 "keep_alive_timeout_ms": 10000, 00:05:20.157 "arbitration_burst": 0, 00:05:20.157 "low_priority_weight": 0, 00:05:20.157 "medium_priority_weight": 0, 00:05:20.157 "high_priority_weight": 0, 00:05:20.157 "nvme_adminq_poll_period_us": 10000, 00:05:20.157 "nvme_ioq_poll_period_us": 0, 00:05:20.157 "io_queue_requests": 0, 00:05:20.157 "delay_cmd_submit": true, 00:05:20.157 "transport_retry_count": 4, 00:05:20.157 "bdev_retry_count": 3, 00:05:20.157 "transport_ack_timeout": 0, 00:05:20.157 "ctrlr_loss_timeout_sec": 0, 00:05:20.157 "reconnect_delay_sec": 0, 00:05:20.157 "fast_io_fail_timeout_sec": 0, 00:05:20.157 "disable_auto_failback": false, 00:05:20.157 "generate_uuids": false, 00:05:20.157 "transport_tos": 0, 00:05:20.157 "nvme_error_stat": false, 00:05:20.157 "rdma_srq_size": 0, 00:05:20.157 "io_path_stat": false, 00:05:20.157 "allow_accel_sequence": false, 00:05:20.157 "rdma_max_cq_size": 0, 00:05:20.157 "rdma_cm_event_timeout_ms": 0, 00:05:20.157 "dhchap_digests": [ 00:05:20.157 "sha256", 00:05:20.157 "sha384", 00:05:20.157 "sha512" 00:05:20.157 ], 00:05:20.157 "dhchap_dhgroups": [ 00:05:20.157 "null", 00:05:20.157 "ffdhe2048", 00:05:20.157 "ffdhe3072", 00:05:20.157 "ffdhe4096", 00:05:20.157 "ffdhe6144", 00:05:20.157 "ffdhe8192" 00:05:20.157 ] 00:05:20.157 } 00:05:20.157 }, 00:05:20.157 { 00:05:20.157 "method": "bdev_nvme_set_hotplug", 00:05:20.157 "params": { 00:05:20.157 "period_us": 100000, 00:05:20.157 "enable": false 00:05:20.157 } 00:05:20.157 }, 00:05:20.157 { 00:05:20.157 "method": "bdev_wait_for_examine" 00:05:20.157 } 00:05:20.157 ] 00:05:20.157 }, 00:05:20.157 { 00:05:20.157 "subsystem": "scsi", 00:05:20.157 "config": null 00:05:20.157 }, 00:05:20.157 { 00:05:20.157 "subsystem": "scheduler", 
00:05:20.157 "config": [ 00:05:20.157 { 00:05:20.157 "method": "framework_set_scheduler", 00:05:20.157 "params": { 00:05:20.157 "name": "static" 00:05:20.157 } 00:05:20.157 } 00:05:20.157 ] 00:05:20.157 }, 00:05:20.157 { 00:05:20.157 "subsystem": "vhost_scsi", 00:05:20.157 "config": [] 00:05:20.157 }, 00:05:20.157 { 00:05:20.157 "subsystem": "vhost_blk", 00:05:20.157 "config": [] 00:05:20.157 }, 00:05:20.157 { 00:05:20.157 "subsystem": "ublk", 00:05:20.157 "config": [] 00:05:20.157 }, 00:05:20.157 { 00:05:20.157 "subsystem": "nbd", 00:05:20.157 "config": [] 00:05:20.157 }, 00:05:20.157 { 00:05:20.157 "subsystem": "nvmf", 00:05:20.157 "config": [ 00:05:20.157 { 00:05:20.157 "method": "nvmf_set_config", 00:05:20.157 "params": { 00:05:20.157 "discovery_filter": "match_any", 00:05:20.157 "admin_cmd_passthru": { 00:05:20.157 "identify_ctrlr": false 00:05:20.157 }, 00:05:20.157 "dhchap_digests": [ 00:05:20.157 "sha256", 00:05:20.157 "sha384", 00:05:20.157 "sha512" 00:05:20.157 ], 00:05:20.157 "dhchap_dhgroups": [ 00:05:20.157 "null", 00:05:20.157 "ffdhe2048", 00:05:20.157 "ffdhe3072", 00:05:20.157 "ffdhe4096", 00:05:20.157 "ffdhe6144", 00:05:20.157 "ffdhe8192" 00:05:20.157 ] 00:05:20.157 } 00:05:20.157 }, 00:05:20.157 { 00:05:20.157 "method": "nvmf_set_max_subsystems", 00:05:20.157 "params": { 00:05:20.157 "max_subsystems": 1024 00:05:20.157 } 00:05:20.157 }, 00:05:20.157 { 00:05:20.157 "method": "nvmf_set_crdt", 00:05:20.157 "params": { 00:05:20.157 "crdt1": 0, 00:05:20.157 "crdt2": 0, 00:05:20.157 "crdt3": 0 00:05:20.157 } 00:05:20.157 }, 00:05:20.157 { 00:05:20.157 "method": "nvmf_create_transport", 00:05:20.157 "params": { 00:05:20.157 "trtype": "TCP", 00:05:20.157 "max_queue_depth": 128, 00:05:20.157 "max_io_qpairs_per_ctrlr": 127, 00:05:20.157 "in_capsule_data_size": 4096, 00:05:20.157 "max_io_size": 131072, 00:05:20.157 "io_unit_size": 131072, 00:05:20.157 "max_aq_depth": 128, 00:05:20.157 "num_shared_buffers": 511, 00:05:20.157 "buf_cache_size": 4294967295, 
00:05:20.157 "dif_insert_or_strip": false, 00:05:20.157 "zcopy": false, 00:05:20.157 "c2h_success": true, 00:05:20.157 "sock_priority": 0, 00:05:20.157 "abort_timeout_sec": 1, 00:05:20.157 "ack_timeout": 0, 00:05:20.157 "data_wr_pool_size": 0 00:05:20.157 } 00:05:20.157 } 00:05:20.157 ] 00:05:20.157 }, 00:05:20.157 { 00:05:20.157 "subsystem": "iscsi", 00:05:20.157 "config": [ 00:05:20.157 { 00:05:20.157 "method": "iscsi_set_options", 00:05:20.157 "params": { 00:05:20.157 "node_base": "iqn.2016-06.io.spdk", 00:05:20.157 "max_sessions": 128, 00:05:20.157 "max_connections_per_session": 2, 00:05:20.157 "max_queue_depth": 64, 00:05:20.157 "default_time2wait": 2, 00:05:20.157 "default_time2retain": 20, 00:05:20.157 "first_burst_length": 8192, 00:05:20.157 "immediate_data": true, 00:05:20.157 "allow_duplicated_isid": false, 00:05:20.157 "error_recovery_level": 0, 00:05:20.157 "nop_timeout": 60, 00:05:20.157 "nop_in_interval": 30, 00:05:20.157 "disable_chap": false, 00:05:20.157 "require_chap": false, 00:05:20.157 "mutual_chap": false, 00:05:20.157 "chap_group": 0, 00:05:20.157 "max_large_datain_per_connection": 64, 00:05:20.157 "max_r2t_per_connection": 4, 00:05:20.157 "pdu_pool_size": 36864, 00:05:20.157 "immediate_data_pool_size": 16384, 00:05:20.157 "data_out_pool_size": 2048 00:05:20.157 } 00:05:20.157 } 00:05:20.157 ] 00:05:20.157 } 00:05:20.157 ] 00:05:20.157 } 00:05:20.157 04:54:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:20.157 04:54:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 69438 00:05:20.157 04:54:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 69438 ']' 00:05:20.157 04:54:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 69438 00:05:20.157 04:54:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:05:20.157 04:54:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 
00:05:20.157 04:54:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69438 00:05:20.417 killing process with pid 69438 00:05:20.417 04:54:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:20.417 04:54:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:20.417 04:54:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69438' 00:05:20.417 04:54:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 69438 00:05:20.417 04:54:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 69438 00:05:20.676 04:54:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=69467 00:05:20.676 04:54:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:20.676 04:54:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:25.953 04:54:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 69467 00:05:25.953 04:54:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 69467 ']' 00:05:25.953 04:54:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 69467 00:05:25.953 04:54:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:05:25.953 04:54:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:25.953 04:54:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69467 00:05:25.953 killing process with pid 69467 00:05:25.953 04:54:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:25.953 04:54:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 
00:05:25.953 04:54:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69467' 00:05:25.953 04:54:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 69467 00:05:25.953 04:54:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 69467 00:05:26.213 04:54:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:26.213 04:54:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:26.213 00:05:26.213 real 0m6.980s 00:05:26.213 user 0m6.510s 00:05:26.213 sys 0m0.742s 00:05:26.213 04:54:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:26.213 ************************************ 00:05:26.213 END TEST skip_rpc_with_json 00:05:26.213 ************************************ 00:05:26.213 04:54:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:26.213 04:54:36 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:26.213 04:54:36 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:26.213 04:54:36 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:26.213 04:54:36 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:26.213 ************************************ 00:05:26.213 START TEST skip_rpc_with_delay 00:05:26.213 ************************************ 00:05:26.213 04:54:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:05:26.213 04:54:36 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:26.213 04:54:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:05:26.213 04:54:36 skip_rpc.skip_rpc_with_delay -- 
common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:26.213 04:54:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:26.213 04:54:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:26.213 04:54:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:26.213 04:54:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:26.213 04:54:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:26.213 04:54:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:26.213 04:54:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:26.213 04:54:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:26.213 04:54:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:26.213 [2024-12-14 04:54:37.061341] app.c: 840:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:05:26.213 [2024-12-14 04:54:37.061559] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:05:26.474 04:54:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:05:26.474 04:54:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:26.474 ************************************ 00:05:26.474 END TEST skip_rpc_with_delay 00:05:26.474 ************************************ 00:05:26.474 04:54:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:26.474 04:54:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:26.474 00:05:26.474 real 0m0.165s 00:05:26.474 user 0m0.088s 00:05:26.474 sys 0m0.075s 00:05:26.474 04:54:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:26.474 04:54:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:26.474 04:54:37 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:26.474 04:54:37 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:26.474 04:54:37 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:26.474 04:54:37 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:26.474 04:54:37 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:26.474 04:54:37 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:26.474 ************************************ 00:05:26.474 START TEST exit_on_failed_rpc_init 00:05:26.474 ************************************ 00:05:26.474 04:54:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:05:26.474 04:54:37 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=69573 00:05:26.474 04:54:37 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 
00:05:26.474 04:54:37 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 69573 00:05:26.474 04:54:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 69573 ']' 00:05:26.474 04:54:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:26.474 04:54:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:26.474 04:54:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:26.474 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:26.474 04:54:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:26.474 04:54:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:26.474 [2024-12-14 04:54:37.296750] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:05:26.474 [2024-12-14 04:54:37.296894] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69573 ] 00:05:26.734 [2024-12-14 04:54:37.456925] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:26.734 [2024-12-14 04:54:37.505234] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.303 04:54:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:27.303 04:54:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:05:27.303 04:54:38 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:27.303 04:54:38 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:27.303 04:54:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:05:27.303 04:54:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:27.303 04:54:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:27.303 04:54:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:27.303 04:54:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:27.303 04:54:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:27.303 04:54:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:27.303 04:54:38 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:27.303 04:54:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:27.303 04:54:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:27.303 04:54:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:27.563 [2024-12-14 04:54:38.255980] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:05:27.563 [2024-12-14 04:54:38.256208] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69591 ] 00:05:27.563 [2024-12-14 04:54:38.417461] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:27.822 [2024-12-14 04:54:38.467592] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:27.822 [2024-12-14 04:54:38.467768] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:05:27.822 [2024-12-14 04:54:38.467833] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:27.822 [2024-12-14 04:54:38.467858] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:27.822 04:54:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:05:27.822 04:54:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:27.822 04:54:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:05:27.822 04:54:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:05:27.822 04:54:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:05:27.822 04:54:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:27.822 04:54:38 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:27.822 04:54:38 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 69573 00:05:27.822 04:54:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 69573 ']' 00:05:27.822 04:54:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 69573 00:05:27.822 04:54:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:05:27.822 04:54:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:27.822 04:54:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69573 00:05:27.822 04:54:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:27.822 04:54:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:27.823 04:54:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69573' 
00:05:27.823 killing process with pid 69573 00:05:27.823 04:54:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 69573 00:05:27.823 04:54:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 69573 00:05:28.393 00:05:28.393 real 0m1.822s 00:05:28.393 user 0m1.992s 00:05:28.393 sys 0m0.527s 00:05:28.393 ************************************ 00:05:28.393 END TEST exit_on_failed_rpc_init 00:05:28.393 ************************************ 00:05:28.393 04:54:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:28.393 04:54:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:28.393 04:54:39 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:28.393 ************************************ 00:05:28.393 END TEST skip_rpc 00:05:28.393 ************************************ 00:05:28.393 00:05:28.393 real 0m14.921s 00:05:28.393 user 0m13.814s 00:05:28.393 sys 0m1.999s 00:05:28.393 04:54:39 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:28.393 04:54:39 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:28.393 04:54:39 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:28.393 04:54:39 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:28.393 04:54:39 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:28.393 04:54:39 -- common/autotest_common.sh@10 -- # set +x 00:05:28.393 ************************************ 00:05:28.393 START TEST rpc_client 00:05:28.393 ************************************ 00:05:28.393 04:54:39 rpc_client -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:28.393 * Looking for test storage... 
00:05:28.393 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:05:28.393 04:54:39 rpc_client -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:28.653 04:54:39 rpc_client -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:28.653 04:54:39 rpc_client -- common/autotest_common.sh@1681 -- # lcov --version 00:05:28.653 04:54:39 rpc_client -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:28.653 04:54:39 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:28.653 04:54:39 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:28.653 04:54:39 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:28.653 04:54:39 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:05:28.653 04:54:39 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:05:28.653 04:54:39 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:05:28.653 04:54:39 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:05:28.653 04:54:39 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:05:28.653 04:54:39 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:05:28.653 04:54:39 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:05:28.653 04:54:39 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:28.653 04:54:39 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:05:28.653 04:54:39 rpc_client -- scripts/common.sh@345 -- # : 1 00:05:28.653 04:54:39 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:28.653 04:54:39 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:28.653 04:54:39 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:05:28.653 04:54:39 rpc_client -- scripts/common.sh@353 -- # local d=1 00:05:28.653 04:54:39 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:28.653 04:54:39 rpc_client -- scripts/common.sh@355 -- # echo 1 00:05:28.653 04:54:39 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:05:28.653 04:54:39 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:05:28.653 04:54:39 rpc_client -- scripts/common.sh@353 -- # local d=2 00:05:28.653 04:54:39 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:28.653 04:54:39 rpc_client -- scripts/common.sh@355 -- # echo 2 00:05:28.653 04:54:39 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:05:28.653 04:54:39 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:28.653 04:54:39 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:28.653 04:54:39 rpc_client -- scripts/common.sh@368 -- # return 0 00:05:28.653 04:54:39 rpc_client -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:28.653 04:54:39 rpc_client -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:28.653 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.653 --rc genhtml_branch_coverage=1 00:05:28.653 --rc genhtml_function_coverage=1 00:05:28.653 --rc genhtml_legend=1 00:05:28.653 --rc geninfo_all_blocks=1 00:05:28.653 --rc geninfo_unexecuted_blocks=1 00:05:28.653 00:05:28.653 ' 00:05:28.653 04:54:39 rpc_client -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:28.653 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.653 --rc genhtml_branch_coverage=1 00:05:28.653 --rc genhtml_function_coverage=1 00:05:28.653 --rc genhtml_legend=1 00:05:28.653 --rc geninfo_all_blocks=1 00:05:28.653 --rc geninfo_unexecuted_blocks=1 00:05:28.653 00:05:28.653 ' 00:05:28.653 04:54:39 rpc_client -- 
common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:28.653 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.653 --rc genhtml_branch_coverage=1 00:05:28.653 --rc genhtml_function_coverage=1 00:05:28.653 --rc genhtml_legend=1 00:05:28.653 --rc geninfo_all_blocks=1 00:05:28.653 --rc geninfo_unexecuted_blocks=1 00:05:28.653 00:05:28.653 ' 00:05:28.653 04:54:39 rpc_client -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:28.653 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.653 --rc genhtml_branch_coverage=1 00:05:28.653 --rc genhtml_function_coverage=1 00:05:28.653 --rc genhtml_legend=1 00:05:28.653 --rc geninfo_all_blocks=1 00:05:28.653 --rc geninfo_unexecuted_blocks=1 00:05:28.653 00:05:28.653 ' 00:05:28.653 04:54:39 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:05:28.653 OK 00:05:28.653 04:54:39 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:28.653 00:05:28.653 real 0m0.273s 00:05:28.653 user 0m0.151s 00:05:28.653 sys 0m0.137s 00:05:28.653 04:54:39 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:28.653 04:54:39 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:28.653 ************************************ 00:05:28.653 END TEST rpc_client 00:05:28.653 ************************************ 00:05:28.653 04:54:39 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:28.653 04:54:39 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:28.653 04:54:39 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:28.653 04:54:39 -- common/autotest_common.sh@10 -- # set +x 00:05:28.653 ************************************ 00:05:28.653 START TEST json_config 00:05:28.653 ************************************ 00:05:28.653 04:54:39 json_config -- common/autotest_common.sh@1125 -- # 
/home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:28.914 04:54:39 json_config -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:28.914 04:54:39 json_config -- common/autotest_common.sh@1681 -- # lcov --version 00:05:28.914 04:54:39 json_config -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:28.914 04:54:39 json_config -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:28.914 04:54:39 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:28.914 04:54:39 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:28.914 04:54:39 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:28.914 04:54:39 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:05:28.914 04:54:39 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:05:28.914 04:54:39 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:05:28.914 04:54:39 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:05:28.914 04:54:39 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:05:28.914 04:54:39 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:05:28.914 04:54:39 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:05:28.914 04:54:39 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:28.914 04:54:39 json_config -- scripts/common.sh@344 -- # case "$op" in 00:05:28.914 04:54:39 json_config -- scripts/common.sh@345 -- # : 1 00:05:28.914 04:54:39 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:28.914 04:54:39 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:28.914 04:54:39 json_config -- scripts/common.sh@365 -- # decimal 1 00:05:28.914 04:54:39 json_config -- scripts/common.sh@353 -- # local d=1 00:05:28.914 04:54:39 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:28.914 04:54:39 json_config -- scripts/common.sh@355 -- # echo 1 00:05:28.914 04:54:39 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:05:28.914 04:54:39 json_config -- scripts/common.sh@366 -- # decimal 2 00:05:28.914 04:54:39 json_config -- scripts/common.sh@353 -- # local d=2 00:05:28.914 04:54:39 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:28.914 04:54:39 json_config -- scripts/common.sh@355 -- # echo 2 00:05:28.914 04:54:39 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:05:28.914 04:54:39 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:28.914 04:54:39 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:28.914 04:54:39 json_config -- scripts/common.sh@368 -- # return 0 00:05:28.914 04:54:39 json_config -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:28.914 04:54:39 json_config -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:28.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.914 --rc genhtml_branch_coverage=1 00:05:28.914 --rc genhtml_function_coverage=1 00:05:28.914 --rc genhtml_legend=1 00:05:28.914 --rc geninfo_all_blocks=1 00:05:28.914 --rc geninfo_unexecuted_blocks=1 00:05:28.914 00:05:28.914 ' 00:05:28.914 04:54:39 json_config -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:28.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.914 --rc genhtml_branch_coverage=1 00:05:28.914 --rc genhtml_function_coverage=1 00:05:28.914 --rc genhtml_legend=1 00:05:28.914 --rc geninfo_all_blocks=1 00:05:28.914 --rc geninfo_unexecuted_blocks=1 00:05:28.914 00:05:28.914 ' 00:05:28.914 04:54:39 json_config -- 
common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:28.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.914 --rc genhtml_branch_coverage=1 00:05:28.914 --rc genhtml_function_coverage=1 00:05:28.914 --rc genhtml_legend=1 00:05:28.914 --rc geninfo_all_blocks=1 00:05:28.914 --rc geninfo_unexecuted_blocks=1 00:05:28.914 00:05:28.914 ' 00:05:28.914 04:54:39 json_config -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:28.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.914 --rc genhtml_branch_coverage=1 00:05:28.914 --rc genhtml_function_coverage=1 00:05:28.914 --rc genhtml_legend=1 00:05:28.914 --rc geninfo_all_blocks=1 00:05:28.914 --rc geninfo_unexecuted_blocks=1 00:05:28.914 00:05:28.914 ' 00:05:28.914 04:54:39 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:28.914 04:54:39 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:28.914 04:54:39 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:28.914 04:54:39 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:28.914 04:54:39 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:28.914 04:54:39 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:28.914 04:54:39 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:28.914 04:54:39 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:28.914 04:54:39 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:28.914 04:54:39 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:28.914 04:54:39 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:28.914 04:54:39 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:28.914 04:54:39 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:deb23972-aa47-4ab1-8501-74d5b0817ca5 00:05:28.914 04:54:39 json_config -- nvmf/common.sh@18 -- # 
NVME_HOSTID=deb23972-aa47-4ab1-8501-74d5b0817ca5 00:05:28.914 04:54:39 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:28.914 04:54:39 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:28.914 04:54:39 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:28.914 04:54:39 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:28.914 04:54:39 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:28.914 04:54:39 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:05:28.914 04:54:39 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:28.914 04:54:39 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:28.914 04:54:39 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:28.915 04:54:39 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:28.915 04:54:39 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:28.915 04:54:39 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:28.915 04:54:39 json_config -- paths/export.sh@5 -- # export PATH 00:05:28.915 04:54:39 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:28.915 04:54:39 json_config -- nvmf/common.sh@51 -- # : 0 00:05:28.915 04:54:39 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:28.915 04:54:39 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:28.915 04:54:39 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:28.915 04:54:39 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:28.915 04:54:39 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:28.915 04:54:39 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:28.915 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:28.915 04:54:39 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:28.915 04:54:39 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:28.915 04:54:39 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:28.915 04:54:39 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 
00:05:28.915 04:54:39 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:28.915 04:54:39 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:28.915 04:54:39 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:28.915 04:54:39 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:28.915 04:54:39 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:05:28.915 WARNING: No tests are enabled so not running JSON configuration tests 00:05:28.915 04:54:39 json_config -- json_config/json_config.sh@28 -- # exit 0 00:05:28.915 00:05:28.915 real 0m0.201s 00:05:28.915 user 0m0.112s 00:05:28.915 sys 0m0.094s 00:05:28.915 04:54:39 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:28.915 04:54:39 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:28.915 ************************************ 00:05:28.915 END TEST json_config 00:05:28.915 ************************************ 00:05:28.915 04:54:39 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:28.915 04:54:39 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:28.915 04:54:39 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:28.915 04:54:39 -- common/autotest_common.sh@10 -- # set +x 00:05:28.915 ************************************ 00:05:28.915 START TEST json_config_extra_key 00:05:28.915 ************************************ 00:05:28.915 04:54:39 json_config_extra_key -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:29.176 04:54:39 json_config_extra_key -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:29.176 04:54:39 json_config_extra_key -- 
common/autotest_common.sh@1681 -- # lcov --version 00:05:29.176 04:54:39 json_config_extra_key -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:29.176 04:54:39 json_config_extra_key -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:29.176 04:54:39 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:29.176 04:54:39 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:29.176 04:54:39 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:29.176 04:54:39 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:05:29.176 04:54:39 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:05:29.176 04:54:39 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:05:29.176 04:54:39 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:05:29.176 04:54:39 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:05:29.176 04:54:39 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:05:29.176 04:54:39 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:05:29.176 04:54:39 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:29.176 04:54:39 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:05:29.176 04:54:39 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:05:29.176 04:54:39 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:29.176 04:54:39 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:29.176 04:54:39 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:05:29.176 04:54:39 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:05:29.176 04:54:39 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:29.176 04:54:39 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:05:29.176 04:54:39 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:05:29.176 04:54:39 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:05:29.176 04:54:39 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:05:29.176 04:54:39 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:29.176 04:54:39 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:05:29.176 04:54:39 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:05:29.176 04:54:39 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:29.176 04:54:39 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:29.176 04:54:39 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:05:29.176 04:54:39 json_config_extra_key -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:29.176 04:54:39 json_config_extra_key -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:29.176 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.176 --rc genhtml_branch_coverage=1 00:05:29.176 --rc genhtml_function_coverage=1 00:05:29.176 --rc genhtml_legend=1 00:05:29.176 --rc geninfo_all_blocks=1 00:05:29.176 --rc geninfo_unexecuted_blocks=1 00:05:29.176 00:05:29.176 ' 00:05:29.176 04:54:39 json_config_extra_key -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:29.176 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.176 --rc genhtml_branch_coverage=1 00:05:29.176 --rc genhtml_function_coverage=1 00:05:29.176 --rc 
genhtml_legend=1 00:05:29.176 --rc geninfo_all_blocks=1 00:05:29.176 --rc geninfo_unexecuted_blocks=1 00:05:29.176 00:05:29.176 ' 00:05:29.176 04:54:39 json_config_extra_key -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:29.176 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.176 --rc genhtml_branch_coverage=1 00:05:29.176 --rc genhtml_function_coverage=1 00:05:29.176 --rc genhtml_legend=1 00:05:29.176 --rc geninfo_all_blocks=1 00:05:29.176 --rc geninfo_unexecuted_blocks=1 00:05:29.176 00:05:29.176 ' 00:05:29.176 04:54:39 json_config_extra_key -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:29.176 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.176 --rc genhtml_branch_coverage=1 00:05:29.176 --rc genhtml_function_coverage=1 00:05:29.176 --rc genhtml_legend=1 00:05:29.176 --rc geninfo_all_blocks=1 00:05:29.176 --rc geninfo_unexecuted_blocks=1 00:05:29.176 00:05:29.176 ' 00:05:29.176 04:54:39 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:29.176 04:54:39 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:29.176 04:54:39 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:29.176 04:54:39 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:29.176 04:54:39 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:29.176 04:54:39 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:29.176 04:54:39 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:29.177 04:54:39 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:29.177 04:54:39 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:29.177 04:54:39 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:29.177 04:54:39 json_config_extra_key -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:29.177 04:54:39 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:29.177 04:54:39 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:deb23972-aa47-4ab1-8501-74d5b0817ca5 00:05:29.177 04:54:39 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=deb23972-aa47-4ab1-8501-74d5b0817ca5 00:05:29.177 04:54:39 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:29.177 04:54:39 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:29.177 04:54:39 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:29.177 04:54:39 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:29.177 04:54:39 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:29.177 04:54:39 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:05:29.177 04:54:39 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:29.177 04:54:39 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:29.177 04:54:39 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:29.177 04:54:39 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:29.177 04:54:39 json_config_extra_key -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:29.177 04:54:39 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:29.177 04:54:39 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:29.177 04:54:39 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:29.177 04:54:39 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:05:29.177 04:54:39 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:29.177 04:54:39 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:29.177 04:54:39 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:29.177 04:54:39 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:29.177 04:54:39 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:05:29.177 04:54:39 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:29.177 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:29.177 04:54:39 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:29.177 04:54:39 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:29.177 04:54:39 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:29.177 04:54:39 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:29.177 04:54:39 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:29.177 04:54:39 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:29.177 04:54:39 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:29.177 04:54:39 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:29.177 04:54:39 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:29.177 04:54:39 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:29.177 04:54:39 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:05:29.177 04:54:39 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:29.177 04:54:39 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:29.177 04:54:39 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:29.177 INFO: launching applications... 
00:05:29.177 04:54:39 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:29.177 04:54:39 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:29.177 04:54:39 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:29.177 04:54:39 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:29.177 04:54:39 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:29.177 04:54:39 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:29.177 04:54:39 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:29.177 04:54:39 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:29.177 04:54:39 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=69779 00:05:29.177 04:54:39 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:29.177 Waiting for target to run... 00:05:29.177 04:54:39 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 69779 /var/tmp/spdk_tgt.sock 00:05:29.177 04:54:39 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 69779 ']' 00:05:29.177 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:05:29.177 04:54:39 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:29.177 04:54:39 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:29.177 04:54:39 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:29.177 04:54:39 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:29.177 04:54:39 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:29.177 04:54:39 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:29.177 [2024-12-14 04:54:40.022275] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:05:29.177 [2024-12-14 04:54:40.022416] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69779 ] 00:05:29.747 [2024-12-14 04:54:40.396148] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:29.747 [2024-12-14 04:54:40.428848] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.006 04:54:40 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:30.006 04:54:40 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:05:30.006 04:54:40 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:30.006 00:05:30.006 04:54:40 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:30.006 INFO: shutting down applications... 
00:05:30.006 04:54:40 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:30.006 04:54:40 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:30.006 04:54:40 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:30.006 04:54:40 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 69779 ]] 00:05:30.006 04:54:40 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 69779 00:05:30.006 04:54:40 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:30.006 04:54:40 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:30.006 04:54:40 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 69779 00:05:30.006 04:54:40 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:30.576 04:54:41 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:30.576 04:54:41 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:30.576 04:54:41 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 69779 00:05:30.576 04:54:41 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:30.576 04:54:41 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:30.576 04:54:41 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:30.576 SPDK target shutdown done 00:05:30.576 04:54:41 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:30.576 Success 00:05:30.576 04:54:41 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:30.576 00:05:30.576 real 0m1.610s 00:05:30.576 user 0m1.320s 00:05:30.576 sys 0m0.479s 00:05:30.576 04:54:41 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:30.576 04:54:41 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:30.576 ************************************ 
00:05:30.576 END TEST json_config_extra_key 00:05:30.576 ************************************ 00:05:30.576 04:54:41 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:30.576 04:54:41 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:30.576 04:54:41 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:30.576 04:54:41 -- common/autotest_common.sh@10 -- # set +x 00:05:30.576 ************************************ 00:05:30.576 START TEST alias_rpc 00:05:30.576 ************************************ 00:05:30.576 04:54:41 alias_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:30.835 * Looking for test storage... 00:05:30.835 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:05:30.835 04:54:41 alias_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:30.835 04:54:41 alias_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:05:30.835 04:54:41 alias_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:30.835 04:54:41 alias_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:30.835 04:54:41 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:30.836 04:54:41 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:30.836 04:54:41 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:30.836 04:54:41 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:30.836 04:54:41 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:30.836 04:54:41 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:30.836 04:54:41 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:30.836 04:54:41 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:30.836 04:54:41 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:30.836 04:54:41 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:30.836 04:54:41 alias_rpc -- 
scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:30.836 04:54:41 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:30.836 04:54:41 alias_rpc -- scripts/common.sh@345 -- # : 1 00:05:30.836 04:54:41 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:30.836 04:54:41 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:30.836 04:54:41 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:30.836 04:54:41 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:30.836 04:54:41 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:30.836 04:54:41 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:30.836 04:54:41 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:30.836 04:54:41 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:30.836 04:54:41 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:30.836 04:54:41 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:30.836 04:54:41 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:30.836 04:54:41 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:30.836 04:54:41 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:30.836 04:54:41 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:30.836 04:54:41 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:30.836 04:54:41 alias_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:30.836 04:54:41 alias_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:30.836 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.836 --rc genhtml_branch_coverage=1 00:05:30.836 --rc genhtml_function_coverage=1 00:05:30.836 --rc genhtml_legend=1 00:05:30.836 --rc geninfo_all_blocks=1 00:05:30.836 --rc geninfo_unexecuted_blocks=1 00:05:30.836 00:05:30.836 ' 00:05:30.836 04:54:41 alias_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:30.836 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.836 --rc genhtml_branch_coverage=1 00:05:30.836 --rc genhtml_function_coverage=1 00:05:30.836 --rc genhtml_legend=1 00:05:30.836 --rc geninfo_all_blocks=1 00:05:30.836 --rc geninfo_unexecuted_blocks=1 00:05:30.836 00:05:30.836 ' 00:05:30.836 04:54:41 alias_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:30.836 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.836 --rc genhtml_branch_coverage=1 00:05:30.836 --rc genhtml_function_coverage=1 00:05:30.836 --rc genhtml_legend=1 00:05:30.836 --rc geninfo_all_blocks=1 00:05:30.836 --rc geninfo_unexecuted_blocks=1 00:05:30.836 00:05:30.836 ' 00:05:30.836 04:54:41 alias_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:30.836 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.836 --rc genhtml_branch_coverage=1 00:05:30.836 --rc genhtml_function_coverage=1 00:05:30.836 --rc genhtml_legend=1 00:05:30.836 --rc geninfo_all_blocks=1 00:05:30.836 --rc geninfo_unexecuted_blocks=1 00:05:30.836 00:05:30.836 ' 00:05:30.836 04:54:41 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:30.836 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:30.836 04:54:41 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=69858 00:05:30.836 04:54:41 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 69858 00:05:30.836 04:54:41 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 69858 ']' 00:05:30.836 04:54:41 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:30.836 04:54:41 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:30.836 04:54:41 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:30.836 04:54:41 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:30.836 04:54:41 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:30.836 04:54:41 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:30.836 [2024-12-14 04:54:41.678870] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:05:30.836 [2024-12-14 04:54:41.679114] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69858 ] 00:05:31.095 [2024-12-14 04:54:41.837336] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:31.095 [2024-12-14 04:54:41.886778] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.663 04:54:42 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:31.663 04:54:42 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:31.663 04:54:42 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:05:31.922 04:54:42 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 69858 00:05:31.922 04:54:42 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 69858 ']' 00:05:31.922 04:54:42 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 69858 00:05:31.922 04:54:42 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:05:31.922 04:54:42 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:31.922 04:54:42 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69858 00:05:31.922 killing process with pid 69858 00:05:31.922 04:54:42 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:31.922 04:54:42 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:31.922 04:54:42 alias_rpc -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 69858' 00:05:31.922 04:54:42 alias_rpc -- common/autotest_common.sh@969 -- # kill 69858 00:05:31.922 04:54:42 alias_rpc -- common/autotest_common.sh@974 -- # wait 69858 00:05:32.491 00:05:32.491 real 0m1.699s 00:05:32.491 user 0m1.669s 00:05:32.491 sys 0m0.498s 00:05:32.491 04:54:43 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:32.491 04:54:43 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:32.491 ************************************ 00:05:32.491 END TEST alias_rpc 00:05:32.491 ************************************ 00:05:32.491 04:54:43 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:32.491 04:54:43 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:32.491 04:54:43 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:32.491 04:54:43 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:32.491 04:54:43 -- common/autotest_common.sh@10 -- # set +x 00:05:32.491 ************************************ 00:05:32.491 START TEST spdkcli_tcp 00:05:32.491 ************************************ 00:05:32.491 04:54:43 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:32.491 * Looking for test storage... 
00:05:32.491 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:05:32.491 04:54:43 spdkcli_tcp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:32.491 04:54:43 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lcov --version 00:05:32.492 04:54:43 spdkcli_tcp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:32.492 04:54:43 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:32.492 04:54:43 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:32.492 04:54:43 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:32.492 04:54:43 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:32.492 04:54:43 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:32.492 04:54:43 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:32.492 04:54:43 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:32.492 04:54:43 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:32.492 04:54:43 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:32.492 04:54:43 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:32.492 04:54:43 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:32.492 04:54:43 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:32.492 04:54:43 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:32.492 04:54:43 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:32.492 04:54:43 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:32.492 04:54:43 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:32.492 04:54:43 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:32.492 04:54:43 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:32.492 04:54:43 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:32.492 04:54:43 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:32.492 04:54:43 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:32.492 04:54:43 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:32.492 04:54:43 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:32.492 04:54:43 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:32.492 04:54:43 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:32.492 04:54:43 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:32.492 04:54:43 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:32.492 04:54:43 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:32.492 04:54:43 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:32.492 04:54:43 spdkcli_tcp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:32.492 04:54:43 spdkcli_tcp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:32.492 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:32.492 --rc genhtml_branch_coverage=1 00:05:32.492 --rc genhtml_function_coverage=1 00:05:32.492 --rc genhtml_legend=1 00:05:32.492 --rc geninfo_all_blocks=1 00:05:32.492 --rc geninfo_unexecuted_blocks=1 00:05:32.492 00:05:32.492 ' 00:05:32.492 04:54:43 spdkcli_tcp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:32.492 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:32.492 --rc genhtml_branch_coverage=1 00:05:32.492 --rc genhtml_function_coverage=1 00:05:32.492 --rc genhtml_legend=1 00:05:32.492 --rc geninfo_all_blocks=1 00:05:32.492 --rc geninfo_unexecuted_blocks=1 00:05:32.492 00:05:32.492 ' 00:05:32.492 04:54:43 spdkcli_tcp -- 
common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:32.492 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:32.492 --rc genhtml_branch_coverage=1 00:05:32.492 --rc genhtml_function_coverage=1 00:05:32.492 --rc genhtml_legend=1 00:05:32.492 --rc geninfo_all_blocks=1 00:05:32.492 --rc geninfo_unexecuted_blocks=1 00:05:32.492 00:05:32.492 ' 00:05:32.492 04:54:43 spdkcli_tcp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:32.492 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:32.492 --rc genhtml_branch_coverage=1 00:05:32.492 --rc genhtml_function_coverage=1 00:05:32.492 --rc genhtml_legend=1 00:05:32.492 --rc geninfo_all_blocks=1 00:05:32.492 --rc geninfo_unexecuted_blocks=1 00:05:32.492 00:05:32.492 ' 00:05:32.492 04:54:43 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:05:32.492 04:54:43 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:05:32.492 04:54:43 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:05:32.492 04:54:43 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:32.492 04:54:43 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:32.492 04:54:43 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:32.492 04:54:43 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:32.492 04:54:43 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:32.492 04:54:43 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:32.492 04:54:43 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=69932 00:05:32.492 04:54:43 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 69932 00:05:32.492 04:54:43 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 69932 ']' 00:05:32.492 04:54:43 spdkcli_tcp -- common/autotest_common.sh@835 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:05:32.492 04:54:43 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:32.492 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:32.492 04:54:43 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:32.492 04:54:43 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:32.492 04:54:43 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:32.492 04:54:43 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:32.752 [2024-12-14 04:54:43.442969] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:05:32.752 [2024-12-14 04:54:43.443084] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69932 ] 00:05:32.752 [2024-12-14 04:54:43.604079] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:33.011 [2024-12-14 04:54:43.654739] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.011 [2024-12-14 04:54:43.654848] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:33.579 04:54:44 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:33.579 04:54:44 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:05:33.579 04:54:44 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=69949 00:05:33.579 04:54:44 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:33.579 04:54:44 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:33.579 [ 00:05:33.579 "bdev_malloc_delete", 00:05:33.579 
"bdev_malloc_create", 00:05:33.579 "bdev_null_resize", 00:05:33.579 "bdev_null_delete", 00:05:33.579 "bdev_null_create", 00:05:33.579 "bdev_nvme_cuse_unregister", 00:05:33.579 "bdev_nvme_cuse_register", 00:05:33.579 "bdev_opal_new_user", 00:05:33.579 "bdev_opal_set_lock_state", 00:05:33.579 "bdev_opal_delete", 00:05:33.579 "bdev_opal_get_info", 00:05:33.579 "bdev_opal_create", 00:05:33.579 "bdev_nvme_opal_revert", 00:05:33.579 "bdev_nvme_opal_init", 00:05:33.579 "bdev_nvme_send_cmd", 00:05:33.579 "bdev_nvme_set_keys", 00:05:33.579 "bdev_nvme_get_path_iostat", 00:05:33.579 "bdev_nvme_get_mdns_discovery_info", 00:05:33.579 "bdev_nvme_stop_mdns_discovery", 00:05:33.579 "bdev_nvme_start_mdns_discovery", 00:05:33.579 "bdev_nvme_set_multipath_policy", 00:05:33.579 "bdev_nvme_set_preferred_path", 00:05:33.579 "bdev_nvme_get_io_paths", 00:05:33.579 "bdev_nvme_remove_error_injection", 00:05:33.579 "bdev_nvme_add_error_injection", 00:05:33.579 "bdev_nvme_get_discovery_info", 00:05:33.579 "bdev_nvme_stop_discovery", 00:05:33.579 "bdev_nvme_start_discovery", 00:05:33.579 "bdev_nvme_get_controller_health_info", 00:05:33.579 "bdev_nvme_disable_controller", 00:05:33.579 "bdev_nvme_enable_controller", 00:05:33.579 "bdev_nvme_reset_controller", 00:05:33.579 "bdev_nvme_get_transport_statistics", 00:05:33.579 "bdev_nvme_apply_firmware", 00:05:33.579 "bdev_nvme_detach_controller", 00:05:33.579 "bdev_nvme_get_controllers", 00:05:33.579 "bdev_nvme_attach_controller", 00:05:33.579 "bdev_nvme_set_hotplug", 00:05:33.579 "bdev_nvme_set_options", 00:05:33.579 "bdev_passthru_delete", 00:05:33.579 "bdev_passthru_create", 00:05:33.579 "bdev_lvol_set_parent_bdev", 00:05:33.579 "bdev_lvol_set_parent", 00:05:33.579 "bdev_lvol_check_shallow_copy", 00:05:33.579 "bdev_lvol_start_shallow_copy", 00:05:33.579 "bdev_lvol_grow_lvstore", 00:05:33.579 "bdev_lvol_get_lvols", 00:05:33.579 "bdev_lvol_get_lvstores", 00:05:33.579 "bdev_lvol_delete", 00:05:33.579 "bdev_lvol_set_read_only", 00:05:33.579 
"bdev_lvol_resize", 00:05:33.579 "bdev_lvol_decouple_parent", 00:05:33.579 "bdev_lvol_inflate", 00:05:33.579 "bdev_lvol_rename", 00:05:33.579 "bdev_lvol_clone_bdev", 00:05:33.579 "bdev_lvol_clone", 00:05:33.579 "bdev_lvol_snapshot", 00:05:33.579 "bdev_lvol_create", 00:05:33.579 "bdev_lvol_delete_lvstore", 00:05:33.579 "bdev_lvol_rename_lvstore", 00:05:33.579 "bdev_lvol_create_lvstore", 00:05:33.579 "bdev_raid_set_options", 00:05:33.579 "bdev_raid_remove_base_bdev", 00:05:33.579 "bdev_raid_add_base_bdev", 00:05:33.579 "bdev_raid_delete", 00:05:33.579 "bdev_raid_create", 00:05:33.579 "bdev_raid_get_bdevs", 00:05:33.579 "bdev_error_inject_error", 00:05:33.579 "bdev_error_delete", 00:05:33.579 "bdev_error_create", 00:05:33.579 "bdev_split_delete", 00:05:33.579 "bdev_split_create", 00:05:33.579 "bdev_delay_delete", 00:05:33.579 "bdev_delay_create", 00:05:33.579 "bdev_delay_update_latency", 00:05:33.579 "bdev_zone_block_delete", 00:05:33.579 "bdev_zone_block_create", 00:05:33.579 "blobfs_create", 00:05:33.579 "blobfs_detect", 00:05:33.579 "blobfs_set_cache_size", 00:05:33.579 "bdev_aio_delete", 00:05:33.579 "bdev_aio_rescan", 00:05:33.579 "bdev_aio_create", 00:05:33.579 "bdev_ftl_set_property", 00:05:33.579 "bdev_ftl_get_properties", 00:05:33.579 "bdev_ftl_get_stats", 00:05:33.579 "bdev_ftl_unmap", 00:05:33.579 "bdev_ftl_unload", 00:05:33.579 "bdev_ftl_delete", 00:05:33.579 "bdev_ftl_load", 00:05:33.579 "bdev_ftl_create", 00:05:33.579 "bdev_virtio_attach_controller", 00:05:33.579 "bdev_virtio_scsi_get_devices", 00:05:33.579 "bdev_virtio_detach_controller", 00:05:33.579 "bdev_virtio_blk_set_hotplug", 00:05:33.579 "bdev_iscsi_delete", 00:05:33.579 "bdev_iscsi_create", 00:05:33.579 "bdev_iscsi_set_options", 00:05:33.579 "accel_error_inject_error", 00:05:33.579 "ioat_scan_accel_module", 00:05:33.579 "dsa_scan_accel_module", 00:05:33.579 "iaa_scan_accel_module", 00:05:33.579 "keyring_file_remove_key", 00:05:33.579 "keyring_file_add_key", 00:05:33.579 
"keyring_linux_set_options", 00:05:33.579 "fsdev_aio_delete", 00:05:33.579 "fsdev_aio_create", 00:05:33.579 "iscsi_get_histogram", 00:05:33.579 "iscsi_enable_histogram", 00:05:33.579 "iscsi_set_options", 00:05:33.579 "iscsi_get_auth_groups", 00:05:33.579 "iscsi_auth_group_remove_secret", 00:05:33.579 "iscsi_auth_group_add_secret", 00:05:33.579 "iscsi_delete_auth_group", 00:05:33.579 "iscsi_create_auth_group", 00:05:33.579 "iscsi_set_discovery_auth", 00:05:33.579 "iscsi_get_options", 00:05:33.579 "iscsi_target_node_request_logout", 00:05:33.579 "iscsi_target_node_set_redirect", 00:05:33.579 "iscsi_target_node_set_auth", 00:05:33.579 "iscsi_target_node_add_lun", 00:05:33.579 "iscsi_get_stats", 00:05:33.579 "iscsi_get_connections", 00:05:33.579 "iscsi_portal_group_set_auth", 00:05:33.579 "iscsi_start_portal_group", 00:05:33.579 "iscsi_delete_portal_group", 00:05:33.579 "iscsi_create_portal_group", 00:05:33.579 "iscsi_get_portal_groups", 00:05:33.579 "iscsi_delete_target_node", 00:05:33.579 "iscsi_target_node_remove_pg_ig_maps", 00:05:33.579 "iscsi_target_node_add_pg_ig_maps", 00:05:33.579 "iscsi_create_target_node", 00:05:33.579 "iscsi_get_target_nodes", 00:05:33.579 "iscsi_delete_initiator_group", 00:05:33.579 "iscsi_initiator_group_remove_initiators", 00:05:33.579 "iscsi_initiator_group_add_initiators", 00:05:33.579 "iscsi_create_initiator_group", 00:05:33.579 "iscsi_get_initiator_groups", 00:05:33.579 "nvmf_set_crdt", 00:05:33.579 "nvmf_set_config", 00:05:33.579 "nvmf_set_max_subsystems", 00:05:33.579 "nvmf_stop_mdns_prr", 00:05:33.579 "nvmf_publish_mdns_prr", 00:05:33.579 "nvmf_subsystem_get_listeners", 00:05:33.579 "nvmf_subsystem_get_qpairs", 00:05:33.579 "nvmf_subsystem_get_controllers", 00:05:33.579 "nvmf_get_stats", 00:05:33.579 "nvmf_get_transports", 00:05:33.579 "nvmf_create_transport", 00:05:33.579 "nvmf_get_targets", 00:05:33.579 "nvmf_delete_target", 00:05:33.579 "nvmf_create_target", 00:05:33.579 "nvmf_subsystem_allow_any_host", 00:05:33.579 
"nvmf_subsystem_set_keys", 00:05:33.579 "nvmf_subsystem_remove_host", 00:05:33.579 "nvmf_subsystem_add_host", 00:05:33.579 "nvmf_ns_remove_host", 00:05:33.579 "nvmf_ns_add_host", 00:05:33.579 "nvmf_subsystem_remove_ns", 00:05:33.579 "nvmf_subsystem_set_ns_ana_group", 00:05:33.579 "nvmf_subsystem_add_ns", 00:05:33.579 "nvmf_subsystem_listener_set_ana_state", 00:05:33.579 "nvmf_discovery_get_referrals", 00:05:33.579 "nvmf_discovery_remove_referral", 00:05:33.579 "nvmf_discovery_add_referral", 00:05:33.579 "nvmf_subsystem_remove_listener", 00:05:33.579 "nvmf_subsystem_add_listener", 00:05:33.579 "nvmf_delete_subsystem", 00:05:33.580 "nvmf_create_subsystem", 00:05:33.580 "nvmf_get_subsystems", 00:05:33.580 "env_dpdk_get_mem_stats", 00:05:33.580 "nbd_get_disks", 00:05:33.580 "nbd_stop_disk", 00:05:33.580 "nbd_start_disk", 00:05:33.580 "ublk_recover_disk", 00:05:33.580 "ublk_get_disks", 00:05:33.580 "ublk_stop_disk", 00:05:33.580 "ublk_start_disk", 00:05:33.580 "ublk_destroy_target", 00:05:33.580 "ublk_create_target", 00:05:33.580 "virtio_blk_create_transport", 00:05:33.580 "virtio_blk_get_transports", 00:05:33.580 "vhost_controller_set_coalescing", 00:05:33.580 "vhost_get_controllers", 00:05:33.580 "vhost_delete_controller", 00:05:33.580 "vhost_create_blk_controller", 00:05:33.580 "vhost_scsi_controller_remove_target", 00:05:33.580 "vhost_scsi_controller_add_target", 00:05:33.580 "vhost_start_scsi_controller", 00:05:33.580 "vhost_create_scsi_controller", 00:05:33.580 "thread_set_cpumask", 00:05:33.580 "scheduler_set_options", 00:05:33.580 "framework_get_governor", 00:05:33.580 "framework_get_scheduler", 00:05:33.580 "framework_set_scheduler", 00:05:33.580 "framework_get_reactors", 00:05:33.580 "thread_get_io_channels", 00:05:33.580 "thread_get_pollers", 00:05:33.580 "thread_get_stats", 00:05:33.580 "framework_monitor_context_switch", 00:05:33.580 "spdk_kill_instance", 00:05:33.580 "log_enable_timestamps", 00:05:33.580 "log_get_flags", 00:05:33.580 "log_clear_flag", 
00:05:33.580 "log_set_flag", 00:05:33.580 "log_get_level", 00:05:33.580 "log_set_level", 00:05:33.580 "log_get_print_level", 00:05:33.580 "log_set_print_level", 00:05:33.580 "framework_enable_cpumask_locks", 00:05:33.580 "framework_disable_cpumask_locks", 00:05:33.580 "framework_wait_init", 00:05:33.580 "framework_start_init", 00:05:33.580 "scsi_get_devices", 00:05:33.580 "bdev_get_histogram", 00:05:33.580 "bdev_enable_histogram", 00:05:33.580 "bdev_set_qos_limit", 00:05:33.580 "bdev_set_qd_sampling_period", 00:05:33.580 "bdev_get_bdevs", 00:05:33.580 "bdev_reset_iostat", 00:05:33.580 "bdev_get_iostat", 00:05:33.580 "bdev_examine", 00:05:33.580 "bdev_wait_for_examine", 00:05:33.580 "bdev_set_options", 00:05:33.580 "accel_get_stats", 00:05:33.580 "accel_set_options", 00:05:33.580 "accel_set_driver", 00:05:33.580 "accel_crypto_key_destroy", 00:05:33.580 "accel_crypto_keys_get", 00:05:33.580 "accel_crypto_key_create", 00:05:33.580 "accel_assign_opc", 00:05:33.580 "accel_get_module_info", 00:05:33.580 "accel_get_opc_assignments", 00:05:33.580 "vmd_rescan", 00:05:33.580 "vmd_remove_device", 00:05:33.580 "vmd_enable", 00:05:33.580 "sock_get_default_impl", 00:05:33.580 "sock_set_default_impl", 00:05:33.580 "sock_impl_set_options", 00:05:33.580 "sock_impl_get_options", 00:05:33.580 "iobuf_get_stats", 00:05:33.580 "iobuf_set_options", 00:05:33.580 "keyring_get_keys", 00:05:33.580 "framework_get_pci_devices", 00:05:33.580 "framework_get_config", 00:05:33.580 "framework_get_subsystems", 00:05:33.580 "fsdev_set_opts", 00:05:33.580 "fsdev_get_opts", 00:05:33.580 "trace_get_info", 00:05:33.580 "trace_get_tpoint_group_mask", 00:05:33.580 "trace_disable_tpoint_group", 00:05:33.580 "trace_enable_tpoint_group", 00:05:33.580 "trace_clear_tpoint_mask", 00:05:33.580 "trace_set_tpoint_mask", 00:05:33.580 "notify_get_notifications", 00:05:33.580 "notify_get_types", 00:05:33.580 "spdk_get_version", 00:05:33.580 "rpc_get_methods" 00:05:33.580 ] 00:05:33.580 04:54:44 spdkcli_tcp -- 
spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:33.580 04:54:44 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:33.580 04:54:44 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:33.839 04:54:44 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:33.839 04:54:44 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 69932 00:05:33.839 04:54:44 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 69932 ']' 00:05:33.839 04:54:44 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 69932 00:05:33.840 04:54:44 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:05:33.840 04:54:44 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:33.840 04:54:44 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69932 00:05:33.840 killing process with pid 69932 00:05:33.840 04:54:44 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:33.840 04:54:44 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:33.840 04:54:44 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69932' 00:05:33.840 04:54:44 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 69932 00:05:33.840 04:54:44 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 69932 00:05:34.099 00:05:34.099 real 0m1.760s 00:05:34.099 user 0m2.893s 00:05:34.099 sys 0m0.543s 00:05:34.099 04:54:44 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:34.099 04:54:44 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:34.099 ************************************ 00:05:34.099 END TEST spdkcli_tcp 00:05:34.099 ************************************ 00:05:34.099 04:54:44 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:34.099 04:54:44 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:34.099 04:54:44 -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:05:34.099 04:54:44 -- common/autotest_common.sh@10 -- # set +x 00:05:34.361 ************************************ 00:05:34.361 START TEST dpdk_mem_utility 00:05:34.361 ************************************ 00:05:34.361 04:54:44 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:34.361 * Looking for test storage... 00:05:34.361 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:05:34.361 04:54:45 dpdk_mem_utility -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:34.361 04:54:45 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lcov --version 00:05:34.361 04:54:45 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:34.361 04:54:45 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:34.361 04:54:45 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:34.361 04:54:45 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:34.361 04:54:45 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:34.361 04:54:45 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:34.361 04:54:45 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:34.361 04:54:45 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:34.361 04:54:45 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:05:34.361 04:54:45 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:34.361 04:54:45 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:34.361 04:54:45 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:34.361 04:54:45 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:34.361 04:54:45 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:34.361 04:54:45 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:34.361 
04:54:45 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:34.361 04:54:45 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:34.361 04:54:45 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:34.361 04:54:45 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:34.361 04:54:45 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:34.361 04:54:45 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:34.361 04:54:45 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:34.361 04:54:45 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:34.361 04:54:45 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:34.361 04:54:45 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:34.361 04:54:45 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:34.361 04:54:45 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:34.361 04:54:45 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:34.361 04:54:45 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:34.361 04:54:45 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:34.361 04:54:45 dpdk_mem_utility -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:34.361 04:54:45 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:34.361 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.361 --rc genhtml_branch_coverage=1 00:05:34.361 --rc genhtml_function_coverage=1 00:05:34.361 --rc genhtml_legend=1 00:05:34.361 --rc geninfo_all_blocks=1 00:05:34.361 --rc geninfo_unexecuted_blocks=1 00:05:34.361 00:05:34.361 ' 00:05:34.361 04:54:45 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:34.361 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.361 --rc 
genhtml_branch_coverage=1 00:05:34.361 --rc genhtml_function_coverage=1 00:05:34.361 --rc genhtml_legend=1 00:05:34.361 --rc geninfo_all_blocks=1 00:05:34.361 --rc geninfo_unexecuted_blocks=1 00:05:34.361 00:05:34.361 ' 00:05:34.361 04:54:45 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:34.361 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.361 --rc genhtml_branch_coverage=1 00:05:34.361 --rc genhtml_function_coverage=1 00:05:34.361 --rc genhtml_legend=1 00:05:34.361 --rc geninfo_all_blocks=1 00:05:34.361 --rc geninfo_unexecuted_blocks=1 00:05:34.361 00:05:34.361 ' 00:05:34.361 04:54:45 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:34.361 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.361 --rc genhtml_branch_coverage=1 00:05:34.361 --rc genhtml_function_coverage=1 00:05:34.361 --rc genhtml_legend=1 00:05:34.361 --rc geninfo_all_blocks=1 00:05:34.361 --rc geninfo_unexecuted_blocks=1 00:05:34.361 00:05:34.361 ' 00:05:34.361 04:54:45 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:34.361 04:54:45 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=70032 00:05:34.361 04:54:45 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:34.361 04:54:45 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 70032 00:05:34.361 04:54:45 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 70032 ']' 00:05:34.361 04:54:45 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:34.361 04:54:45 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:34.361 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:34.361 04:54:45 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:34.361 04:54:45 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:34.361 04:54:45 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:34.621 [2024-12-14 04:54:45.274061] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:05:34.622 [2024-12-14 04:54:45.274220] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70032 ] 00:05:34.622 [2024-12-14 04:54:45.434217] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:34.622 [2024-12-14 04:54:45.483830] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.561 04:54:46 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:35.561 04:54:46 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:05:35.561 04:54:46 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:35.561 04:54:46 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:35.561 04:54:46 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:35.561 04:54:46 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:35.561 { 00:05:35.561 "filename": "/tmp/spdk_mem_dump.txt" 00:05:35.561 } 00:05:35.561 04:54:46 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:35.561 04:54:46 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:35.561 DPDK memory size 860.000000 MiB in 1 heap(s) 00:05:35.561 1 heaps 
totaling size 860.000000 MiB 00:05:35.561 size: 860.000000 MiB heap id: 0 00:05:35.561 end heaps---------- 00:05:35.561 9 mempools totaling size 642.649841 MiB 00:05:35.561 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:35.561 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:35.561 size: 92.545471 MiB name: bdev_io_70032 00:05:35.561 size: 51.011292 MiB name: evtpool_70032 00:05:35.561 size: 50.003479 MiB name: msgpool_70032 00:05:35.561 size: 36.509338 MiB name: fsdev_io_70032 00:05:35.561 size: 21.763794 MiB name: PDU_Pool 00:05:35.561 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:35.561 size: 0.026123 MiB name: Session_Pool 00:05:35.561 end mempools------- 00:05:35.561 6 memzones totaling size 4.142822 MiB 00:05:35.561 size: 1.000366 MiB name: RG_ring_0_70032 00:05:35.561 size: 1.000366 MiB name: RG_ring_1_70032 00:05:35.561 size: 1.000366 MiB name: RG_ring_4_70032 00:05:35.561 size: 1.000366 MiB name: RG_ring_5_70032 00:05:35.561 size: 0.125366 MiB name: RG_ring_2_70032 00:05:35.561 size: 0.015991 MiB name: RG_ring_3_70032 00:05:35.561 end memzones------- 00:05:35.561 04:54:46 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:35.561 heap id: 0 total size: 860.000000 MiB number of busy elements: 304 number of free elements: 16 00:05:35.561 list of free elements. 
size: 13.937073 MiB 00:05:35.561 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:35.561 element at address: 0x200000800000 with size: 1.996948 MiB 00:05:35.561 element at address: 0x20001bc00000 with size: 0.999878 MiB 00:05:35.561 element at address: 0x20001be00000 with size: 0.999878 MiB 00:05:35.561 element at address: 0x200034a00000 with size: 0.994446 MiB 00:05:35.561 element at address: 0x200009600000 with size: 0.959839 MiB 00:05:35.561 element at address: 0x200015e00000 with size: 0.954285 MiB 00:05:35.561 element at address: 0x20001c000000 with size: 0.936584 MiB 00:05:35.561 element at address: 0x200000200000 with size: 0.834839 MiB 00:05:35.561 element at address: 0x20001d800000 with size: 0.567871 MiB 00:05:35.561 element at address: 0x200003e00000 with size: 0.489563 MiB 00:05:35.561 element at address: 0x20000d800000 with size: 0.489441 MiB 00:05:35.561 element at address: 0x20001c200000 with size: 0.485657 MiB 00:05:35.561 element at address: 0x200007000000 with size: 0.480469 MiB 00:05:35.561 element at address: 0x20002ac00000 with size: 0.395752 MiB 00:05:35.561 element at address: 0x200003a00000 with size: 0.352112 MiB 00:05:35.561 list of standard malloc elements. 
size: 199.266235 MiB 00:05:35.561 element at address: 0x20000d9fff80 with size: 132.000122 MiB 00:05:35.561 element at address: 0x2000097fff80 with size: 64.000122 MiB 00:05:35.561 element at address: 0x20001bcfff80 with size: 1.000122 MiB 00:05:35.561 element at address: 0x20001befff80 with size: 1.000122 MiB 00:05:35.561 element at address: 0x20001c0fff80 with size: 1.000122 MiB 00:05:35.561 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:35.561 element at address: 0x20001c0eff00 with size: 0.062622 MiB 00:05:35.561 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:35.561 element at address: 0x20001c0efdc0 with size: 0.000305 MiB 00:05:35.561 element at address: 0x2000002d5b80 with size: 0.000183 MiB 00:05:35.561 element at address: 0x2000002d5c40 with size: 0.000183 MiB 00:05:35.561 element at address: 0x2000002d5d00 with size: 0.000183 MiB 00:05:35.561 element at address: 0x2000002d5dc0 with size: 0.000183 MiB 00:05:35.561 element at address: 0x2000002d5e80 with size: 0.000183 MiB 00:05:35.561 element at address: 0x2000002d5f40 with size: 0.000183 MiB 00:05:35.561 element at address: 0x2000002d6000 with size: 0.000183 MiB 00:05:35.561 element at address: 0x2000002d60c0 with size: 0.000183 MiB 00:05:35.561 element at address: 0x2000002d6180 with size: 0.000183 MiB 00:05:35.562 element at address: 0x2000002d6240 with size: 0.000183 MiB 00:05:35.562 element at address: 0x2000002d6300 with size: 0.000183 MiB 00:05:35.562 element at address: 0x2000002d63c0 with size: 0.000183 MiB 00:05:35.562 element at address: 0x2000002d6480 with size: 0.000183 MiB 00:05:35.562 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:05:35.562 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:05:35.562 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:05:35.562 element at address: 0x2000002d68c0 with size: 0.000183 MiB 00:05:35.562 element at address: 0x2000002d6980 with size: 0.000183 MiB 00:05:35.562 element at 
address: 0x2000002d6a40 with size: 0.000183 MiB 00:05:35.562 element at address: 0x2000002d6b00 with size: 0.000183 MiB 00:05:35.562 element at address: 0x2000002d6bc0 with size: 0.000183 MiB 00:05:35.562 element at address: 0x2000002d6c80 with size: 0.000183 MiB 00:05:35.562 element at address: 0x2000002d6d40 with size: 0.000183 MiB 00:05:35.562 element at address: 0x2000002d6e00 with size: 0.000183 MiB 00:05:35.562 element at address: 0x2000002d6ec0 with size: 0.000183 MiB 00:05:35.562 element at address: 0x2000002d6f80 with size: 0.000183 MiB 00:05:35.562 element at address: 0x2000002d7040 with size: 0.000183 MiB 00:05:35.562 element at address: 0x2000002d7100 with size: 0.000183 MiB 00:05:35.562 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:05:35.562 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:05:35.562 element at address: 0x2000002d7340 with size: 0.000183 MiB 00:05:35.562 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:05:35.562 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:05:35.562 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:05:35.562 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:05:35.562 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:05:35.562 element at address: 0x2000002d77c0 with size: 0.000183 MiB 00:05:35.562 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:05:35.562 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:05:35.562 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:05:35.562 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:35.562 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:35.562 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:35.562 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:35.562 element at address: 0x200003a5a240 with size: 0.000183 MiB 00:05:35.562 element at address: 0x200003a5a440 with size: 0.000183 MiB 
00:05:35.562 element at address: 0x200003a5e700 with size: 0.000183 MiB 00:05:35.562 element at address: 0x200003a7e9c0 with size: 0.000183 MiB 00:05:35.562 element at address: 0x200003a7ea80 with size: 0.000183 MiB 00:05:35.562 element at address: 0x200003a7eb40 with size: 0.000183 MiB 00:05:35.562 element at address: 0x200003a7ec00 with size: 0.000183 MiB 00:05:35.562 element at address: 0x200003a7ecc0 with size: 0.000183 MiB 00:05:35.562 element at address: 0x200003a7ed80 with size: 0.000183 MiB 00:05:35.562 element at address: 0x200003a7ee40 with size: 0.000183 MiB 00:05:35.562 element at address: 0x200003a7ef00 with size: 0.000183 MiB 00:05:35.562 element at address: 0x200003a7efc0 with size: 0.000183 MiB 00:05:35.562 element at address: 0x200003a7f080 with size: 0.000183 MiB 00:05:35.562 element at address: 0x200003a7f140 with size: 0.000183 MiB 00:05:35.562 element at address: 0x200003a7f200 with size: 0.000183 MiB 00:05:35.562 element at address: 0x200003a7f2c0 with size: 0.000183 MiB 00:05:35.562 element at address: 0x200003a7f380 with size: 0.000183 MiB 00:05:35.562 element at address: 0x200003a7f440 with size: 0.000183 MiB 00:05:35.562 element at address: 0x200003a7f500 with size: 0.000183 MiB 00:05:35.562 element at address: 0x200003a7f5c0 with size: 0.000183 MiB 00:05:35.562 element at address: 0x200003aff880 with size: 0.000183 MiB 00:05:35.562 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:35.562 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:35.562 element at address: 0x200003e7d540 with size: 0.000183 MiB 00:05:35.562 element at address: 0x200003e7d600 with size: 0.000183 MiB 00:05:35.562 element at address: 0x200003e7d6c0 with size: 0.000183 MiB 00:05:35.562 element at address: 0x200003e7d780 with size: 0.000183 MiB 00:05:35.562 element at address: 0x200003e7d840 with size: 0.000183 MiB 00:05:35.562 element at address: 0x200003e7d900 with size: 0.000183 MiB 00:05:35.562 element at address: 0x200003e7d9c0 with 
size: 0.000183 MiB 00:05:35.562 element at address: 0x200003e7da80 with size: 0.000183 MiB 00:05:35.562 element at address: 0x200003e7db40 with size: 0.000183 MiB 00:05:35.562 element at address: 0x200003e7dc00 with size: 0.000183 MiB 00:05:35.562 element at address: 0x200003e7dcc0 with size: 0.000183 MiB 00:05:35.562 element at address: 0x200003e7dd80 with size: 0.000183 MiB 00:05:35.562 element at address: 0x200003e7de40 with size: 0.000183 MiB 00:05:35.562 element at address: 0x200003e7df00 with size: 0.000183 MiB 00:05:35.562 element at address: 0x200003e7dfc0 with size: 0.000183 MiB 00:05:35.562 element at address: 0x200003e7e080 with size: 0.000183 MiB 00:05:35.562 element at address: 0x200003e7e140 with size: 0.000183 MiB 00:05:35.562 element at address: 0x200003e7e200 with size: 0.000183 MiB 00:05:35.562 element at address: 0x200003e7e2c0 with size: 0.000183 MiB 00:05:35.562 element at address: 0x200003e7e380 with size: 0.000183 MiB 00:05:35.562 element at address: 0x200003e7e440 with size: 0.000183 MiB 00:05:35.562 element at address: 0x200003e7e500 with size: 0.000183 MiB 00:05:35.562 element at address: 0x200003e7e5c0 with size: 0.000183 MiB 00:05:35.562 element at address: 0x200003e7e680 with size: 0.000183 MiB 00:05:35.562 element at address: 0x200003e7e740 with size: 0.000183 MiB 00:05:35.562 element at address: 0x200003e7e800 with size: 0.000183 MiB 00:05:35.562 element at address: 0x200003e7e8c0 with size: 0.000183 MiB 00:05:35.562 element at address: 0x200003e7e980 with size: 0.000183 MiB 00:05:35.562 element at address: 0x200003e7ea40 with size: 0.000183 MiB 00:05:35.562 element at address: 0x200003e7eb00 with size: 0.000183 MiB 00:05:35.562 element at address: 0x200003e7ebc0 with size: 0.000183 MiB 00:05:35.562 element at address: 0x200003e7ec80 with size: 0.000183 MiB 00:05:35.562 element at address: 0x200003e7ed40 with size: 0.000183 MiB 00:05:35.562 element at address: 0x200003e7ee00 with size: 0.000183 MiB 00:05:35.562 element at address: 
0x200003eff0c0 with size: 0.000183 MiB 00:05:35.562 element at address: 0x20000707b000 with size: 0.000183 MiB 00:05:35.562 element at address: 0x20000707b0c0 with size: 0.000183 MiB 00:05:35.562 element at address: 0x20000707b180 with size: 0.000183 MiB 00:05:35.562 element at address: 0x20000707b240 with size: 0.000183 MiB 00:05:35.562 element at address: 0x20000707b300 with size: 0.000183 MiB 00:05:35.562 element at address: 0x20000707b3c0 with size: 0.000183 MiB 00:05:35.562 element at address: 0x20000707b480 with size: 0.000183 MiB 00:05:35.562 element at address: 0x20000707b540 with size: 0.000183 MiB 00:05:35.562 element at address: 0x20000707b600 with size: 0.000183 MiB 00:05:35.562 element at address: 0x20000707b6c0 with size: 0.000183 MiB 00:05:35.562 element at address: 0x2000070fb980 with size: 0.000183 MiB 00:05:35.562 element at address: 0x2000096fdd80 with size: 0.000183 MiB 00:05:35.562 element at address: 0x20000d87d4c0 with size: 0.000183 MiB 00:05:35.562 element at address: 0x20000d87d580 with size: 0.000183 MiB 00:05:35.562 element at address: 0x20000d87d640 with size: 0.000183 MiB 00:05:35.562 element at address: 0x20000d87d700 with size: 0.000183 MiB 00:05:35.562 element at address: 0x20000d87d7c0 with size: 0.000183 MiB 00:05:35.562 element at address: 0x20000d87d880 with size: 0.000183 MiB 00:05:35.562 element at address: 0x20000d87d940 with size: 0.000183 MiB 00:05:35.562 element at address: 0x20000d87da00 with size: 0.000183 MiB 00:05:35.562 element at address: 0x20000d87dac0 with size: 0.000183 MiB 00:05:35.562 element at address: 0x20000d8fdd80 with size: 0.000183 MiB 00:05:35.562 element at address: 0x200015ef44c0 with size: 0.000183 MiB 00:05:35.562 element at address: 0x20001c0efc40 with size: 0.000183 MiB 00:05:35.562 element at address: 0x20001c0efd00 with size: 0.000183 MiB 00:05:35.562 element at address: 0x20001c2bc740 with size: 0.000183 MiB 00:05:35.562 element at address: 0x20001d891600 with size: 0.000183 MiB 00:05:35.562 
element at address: 0x20001d8916c0 with size: 0.000183 MiB 00:05:35.562 element at address: 0x20001d891780 with size: 0.000183 MiB 00:05:35.562 element at address: 0x20001d891840 with size: 0.000183 MiB 00:05:35.562 element at address: 0x20001d891900 with size: 0.000183 MiB 00:05:35.562 element at address: 0x20001d8919c0 with size: 0.000183 MiB 00:05:35.562 element at address: 0x20001d891a80 with size: 0.000183 MiB 00:05:35.562 element at address: 0x20001d891b40 with size: 0.000183 MiB 00:05:35.562 element at address: 0x20001d891c00 with size: 0.000183 MiB 00:05:35.562 element at address: 0x20001d891cc0 with size: 0.000183 MiB 00:05:35.562 element at address: 0x20001d891d80 with size: 0.000183 MiB 00:05:35.562 element at address: 0x20001d891e40 with size: 0.000183 MiB 00:05:35.562 element at address: 0x20001d891f00 with size: 0.000183 MiB 00:05:35.562 element at address: 0x20001d891fc0 with size: 0.000183 MiB 00:05:35.562 element at address: 0x20001d892080 with size: 0.000183 MiB 00:05:35.562 element at address: 0x20001d892140 with size: 0.000183 MiB 00:05:35.562 element at address: 0x20001d892200 with size: 0.000183 MiB 00:05:35.562 element at address: 0x20001d8922c0 with size: 0.000183 MiB 00:05:35.562 element at address: 0x20001d892380 with size: 0.000183 MiB 00:05:35.562 element at address: 0x20001d892440 with size: 0.000183 MiB 00:05:35.562 element at address: 0x20001d892500 with size: 0.000183 MiB 00:05:35.562 element at address: 0x20001d8925c0 with size: 0.000183 MiB 00:05:35.562 element at address: 0x20001d892680 with size: 0.000183 MiB 00:05:35.562 element at address: 0x20001d892740 with size: 0.000183 MiB 00:05:35.562 element at address: 0x20001d892800 with size: 0.000183 MiB 00:05:35.562 element at address: 0x20001d8928c0 with size: 0.000183 MiB 00:05:35.562 element at address: 0x20001d892980 with size: 0.000183 MiB 00:05:35.562 element at address: 0x20001d892a40 with size: 0.000183 MiB 00:05:35.562 element at address: 0x20001d892b00 with size: 0.000183 
MiB 00:05:35.562 element at address: 0x20001d892bc0 with size: 0.000183 MiB 00:05:35.563 element at address: 0x20001d892c80 with size: 0.000183 MiB 00:05:35.563 element at address: 0x20001d892d40 with size: 0.000183 MiB 00:05:35.563 element at address: 0x20001d892e00 with size: 0.000183 MiB 00:05:35.563 element at address: 0x20001d892ec0 with size: 0.000183 MiB 00:05:35.563 element at address: 0x20001d892f80 with size: 0.000183 MiB 00:05:35.563 element at address: 0x20001d893040 with size: 0.000183 MiB 00:05:35.563 element at address: 0x20001d893100 with size: 0.000183 MiB 00:05:35.563 element at address: 0x20001d8931c0 with size: 0.000183 MiB 00:05:35.563 element at address: 0x20001d893280 with size: 0.000183 MiB 00:05:35.563 element at address: 0x20001d893340 with size: 0.000183 MiB 00:05:35.563 element at address: 0x20001d893400 with size: 0.000183 MiB 00:05:35.563 element at address: 0x20001d8934c0 with size: 0.000183 MiB 00:05:35.563 element at address: 0x20001d893580 with size: 0.000183 MiB 00:05:35.563 element at address: 0x20001d893640 with size: 0.000183 MiB 00:05:35.563 element at address: 0x20001d893700 with size: 0.000183 MiB 00:05:35.563 element at address: 0x20001d8937c0 with size: 0.000183 MiB 00:05:35.563 element at address: 0x20001d893880 with size: 0.000183 MiB 00:05:35.563 element at address: 0x20001d893940 with size: 0.000183 MiB 00:05:35.563 element at address: 0x20001d893a00 with size: 0.000183 MiB 00:05:35.563 element at address: 0x20001d893ac0 with size: 0.000183 MiB 00:05:35.563 element at address: 0x20001d893b80 with size: 0.000183 MiB 00:05:35.563 element at address: 0x20001d893c40 with size: 0.000183 MiB 00:05:35.563 element at address: 0x20001d893d00 with size: 0.000183 MiB 00:05:35.563 element at address: 0x20001d893dc0 with size: 0.000183 MiB 00:05:35.563 element at address: 0x20001d893e80 with size: 0.000183 MiB 00:05:35.563 element at address: 0x20001d893f40 with size: 0.000183 MiB 00:05:35.563 element at address: 0x20001d894000 
with size: 0.000183 MiB 00:05:35.563 element at address: 0x20001d8940c0 with size: 0.000183 MiB 00:05:35.563 element at address: 0x20001d894180 with size: 0.000183 MiB 00:05:35.563 element at address: 0x20001d894240 with size: 0.000183 MiB 00:05:35.563 element at address: 0x20001d894300 with size: 0.000183 MiB 00:05:35.563 element at address: 0x20001d8943c0 with size: 0.000183 MiB 00:05:35.563 element at address: 0x20001d894480 with size: 0.000183 MiB 00:05:35.563 element at address: 0x20001d894540 with size: 0.000183 MiB 00:05:35.563 element at address: 0x20001d894600 with size: 0.000183 MiB 00:05:35.563 element at address: 0x20001d8946c0 with size: 0.000183 MiB 00:05:35.563 element at address: 0x20001d894780 with size: 0.000183 MiB 00:05:35.563 element at address: 0x20001d894840 with size: 0.000183 MiB 00:05:35.563 element at address: 0x20001d894900 with size: 0.000183 MiB 00:05:35.563 element at address: 0x20001d8949c0 with size: 0.000183 MiB 00:05:35.563 element at address: 0x20001d894a80 with size: 0.000183 MiB 00:05:35.563 element at address: 0x20001d894b40 with size: 0.000183 MiB 00:05:35.563 element at address: 0x20001d894c00 with size: 0.000183 MiB 00:05:35.563 element at address: 0x20001d894cc0 with size: 0.000183 MiB 00:05:35.563 element at address: 0x20001d894d80 with size: 0.000183 MiB 00:05:35.563 element at address: 0x20001d894e40 with size: 0.000183 MiB 00:05:35.563 element at address: 0x20001d894f00 with size: 0.000183 MiB 00:05:35.563 element at address: 0x20001d894fc0 with size: 0.000183 MiB 00:05:35.563 element at address: 0x20001d895080 with size: 0.000183 MiB 00:05:35.563 element at address: 0x20001d895140 with size: 0.000183 MiB 00:05:35.563 element at address: 0x20001d895200 with size: 0.000183 MiB 00:05:35.563 element at address: 0x20001d8952c0 with size: 0.000183 MiB 00:05:35.563 element at address: 0x20001d895380 with size: 0.000183 MiB 00:05:35.563 element at address: 0x20001d895440 with size: 0.000183 MiB 00:05:35.563 element at 
address: 0x20002ac65500 with size: 0.000183 MiB 00:05:35.563 element at address: 0x20002ac655c0 with size: 0.000183 MiB 00:05:35.563 element at address: 0x20002ac6c1c0 with size: 0.000183 MiB 00:05:35.563 element at address: 0x20002ac6c3c0 with size: 0.000183 MiB 00:05:35.563 element at address: 0x20002ac6c480 with size: 0.000183 MiB 00:05:35.563 element at address: 0x20002ac6c540 with size: 0.000183 MiB 00:05:35.563 element at address: 0x20002ac6c600 with size: 0.000183 MiB 00:05:35.563 element at address: 0x20002ac6c6c0 with size: 0.000183 MiB 00:05:35.563 element at address: 0x20002ac6c780 with size: 0.000183 MiB 00:05:35.563 element at address: 0x20002ac6c840 with size: 0.000183 MiB 00:05:35.563 element at address: 0x20002ac6c900 with size: 0.000183 MiB 00:05:35.563 element at address: 0x20002ac6c9c0 with size: 0.000183 MiB 00:05:35.563 element at address: 0x20002ac6ca80 with size: 0.000183 MiB 00:05:35.563 element at address: 0x20002ac6cb40 with size: 0.000183 MiB 00:05:35.563 element at address: 0x20002ac6cc00 with size: 0.000183 MiB 00:05:35.563 element at address: 0x20002ac6ccc0 with size: 0.000183 MiB 00:05:35.563 element at address: 0x20002ac6cd80 with size: 0.000183 MiB 00:05:35.563 element at address: 0x20002ac6ce40 with size: 0.000183 MiB 00:05:35.563 element at address: 0x20002ac6cf00 with size: 0.000183 MiB 00:05:35.563 element at address: 0x20002ac6cfc0 with size: 0.000183 MiB 00:05:35.563 element at address: 0x20002ac6d080 with size: 0.000183 MiB 00:05:35.563 element at address: 0x20002ac6d140 with size: 0.000183 MiB 00:05:35.563 element at address: 0x20002ac6d200 with size: 0.000183 MiB 00:05:35.563 element at address: 0x20002ac6d2c0 with size: 0.000183 MiB 00:05:35.563 element at address: 0x20002ac6d380 with size: 0.000183 MiB 00:05:35.563 element at address: 0x20002ac6d440 with size: 0.000183 MiB 00:05:35.563 element at address: 0x20002ac6d500 with size: 0.000183 MiB 00:05:35.563 element at address: 0x20002ac6d5c0 with size: 0.000183 MiB 
00:05:35.563 element at address: 0x20002ac6d680 with size: 0.000183 MiB 00:05:35.563 element at address: 0x20002ac6d740 with size: 0.000183 MiB 00:05:35.563 element at address: 0x20002ac6d800 with size: 0.000183 MiB 00:05:35.563 element at address: 0x20002ac6d8c0 with size: 0.000183 MiB 00:05:35.563 element at address: 0x20002ac6d980 with size: 0.000183 MiB 00:05:35.563 element at address: 0x20002ac6da40 with size: 0.000183 MiB 00:05:35.563 element at address: 0x20002ac6db00 with size: 0.000183 MiB 00:05:35.563 element at address: 0x20002ac6dbc0 with size: 0.000183 MiB 00:05:35.563 element at address: 0x20002ac6dc80 with size: 0.000183 MiB 00:05:35.563 element at address: 0x20002ac6dd40 with size: 0.000183 MiB 00:05:35.563 element at address: 0x20002ac6de00 with size: 0.000183 MiB 00:05:35.563 element at address: 0x20002ac6dec0 with size: 0.000183 MiB 00:05:35.563 element at address: 0x20002ac6df80 with size: 0.000183 MiB 00:05:35.563 element at address: 0x20002ac6e040 with size: 0.000183 MiB 00:05:35.563 element at address: 0x20002ac6e100 with size: 0.000183 MiB 00:05:35.563 element at address: 0x20002ac6e1c0 with size: 0.000183 MiB 00:05:35.563 element at address: 0x20002ac6e280 with size: 0.000183 MiB 00:05:35.563 element at address: 0x20002ac6e340 with size: 0.000183 MiB 00:05:35.563 element at address: 0x20002ac6e400 with size: 0.000183 MiB 00:05:35.563 element at address: 0x20002ac6e4c0 with size: 0.000183 MiB 00:05:35.563 element at address: 0x20002ac6e580 with size: 0.000183 MiB 00:05:35.563 element at address: 0x20002ac6e640 with size: 0.000183 MiB 00:05:35.563 element at address: 0x20002ac6e700 with size: 0.000183 MiB 00:05:35.563 element at address: 0x20002ac6e7c0 with size: 0.000183 MiB 00:05:35.563 element at address: 0x20002ac6e880 with size: 0.000183 MiB 00:05:35.563 element at address: 0x20002ac6e940 with size: 0.000183 MiB 00:05:35.563 element at address: 0x20002ac6ea00 with size: 0.000183 MiB 00:05:35.563 element at address: 0x20002ac6eac0 with 
size: 0.000183 MiB 00:05:35.563 element at address: 0x20002ac6eb80 with size: 0.000183 MiB 00:05:35.563 element at address: 0x20002ac6ec40 with size: 0.000183 MiB 00:05:35.563 element at address: 0x20002ac6ed00 with size: 0.000183 MiB 00:05:35.563 element at address: 0x20002ac6edc0 with size: 0.000183 MiB 00:05:35.563 element at address: 0x20002ac6ee80 with size: 0.000183 MiB 00:05:35.563 element at address: 0x20002ac6ef40 with size: 0.000183 MiB 00:05:35.563 element at address: 0x20002ac6f000 with size: 0.000183 MiB 00:05:35.563 element at address: 0x20002ac6f0c0 with size: 0.000183 MiB 00:05:35.563 element at address: 0x20002ac6f180 with size: 0.000183 MiB 00:05:35.563 element at address: 0x20002ac6f240 with size: 0.000183 MiB 00:05:35.563 element at address: 0x20002ac6f300 with size: 0.000183 MiB 00:05:35.563 element at address: 0x20002ac6f3c0 with size: 0.000183 MiB 00:05:35.563 element at address: 0x20002ac6f480 with size: 0.000183 MiB 00:05:35.563 element at address: 0x20002ac6f540 with size: 0.000183 MiB 00:05:35.563 element at address: 0x20002ac6f600 with size: 0.000183 MiB 00:05:35.563 element at address: 0x20002ac6f6c0 with size: 0.000183 MiB 00:05:35.563 element at address: 0x20002ac6f780 with size: 0.000183 MiB 00:05:35.563 element at address: 0x20002ac6f840 with size: 0.000183 MiB 00:05:35.563 element at address: 0x20002ac6f900 with size: 0.000183 MiB 00:05:35.563 element at address: 0x20002ac6f9c0 with size: 0.000183 MiB 00:05:35.563 element at address: 0x20002ac6fa80 with size: 0.000183 MiB 00:05:35.563 element at address: 0x20002ac6fb40 with size: 0.000183 MiB 00:05:35.563 element at address: 0x20002ac6fc00 with size: 0.000183 MiB 00:05:35.563 element at address: 0x20002ac6fcc0 with size: 0.000183 MiB 00:05:35.563 element at address: 0x20002ac6fd80 with size: 0.000183 MiB 00:05:35.563 element at address: 0x20002ac6fe40 with size: 0.000183 MiB 00:05:35.563 element at address: 0x20002ac6ff00 with size: 0.000183 MiB 00:05:35.563 list of memzone 
associated elements. size: 646.796692 MiB 00:05:35.563 element at address: 0x20001d895500 with size: 211.416748 MiB 00:05:35.563 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:35.563 element at address: 0x20002ac6ffc0 with size: 157.562561 MiB 00:05:35.563 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:35.563 element at address: 0x200015ff4780 with size: 92.045044 MiB 00:05:35.563 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_70032_0 00:05:35.563 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:35.563 associated memzone info: size: 48.002930 MiB name: MP_evtpool_70032_0 00:05:35.563 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:35.563 associated memzone info: size: 48.002930 MiB name: MP_msgpool_70032_0 00:05:35.563 element at address: 0x2000071fdb80 with size: 36.008911 MiB 00:05:35.563 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_70032_0 00:05:35.563 element at address: 0x20001c3be940 with size: 20.255554 MiB 00:05:35.563 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:35.563 element at address: 0x200034bfeb40 with size: 18.005066 MiB 00:05:35.563 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:35.563 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:35.563 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_70032 00:05:35.563 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:35.564 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_70032 00:05:35.564 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:35.564 associated memzone info: size: 1.007996 MiB name: MP_evtpool_70032 00:05:35.564 element at address: 0x20000d8fde40 with size: 1.008118 MiB 00:05:35.564 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:35.564 element at address: 0x20001c2bc800 with size: 1.008118 MiB 00:05:35.564 
associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:35.564 element at address: 0x2000096fde40 with size: 1.008118 MiB 00:05:35.564 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:35.564 element at address: 0x2000070fba40 with size: 1.008118 MiB 00:05:35.564 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:35.564 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:35.564 associated memzone info: size: 1.000366 MiB name: RG_ring_0_70032 00:05:35.564 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:35.564 associated memzone info: size: 1.000366 MiB name: RG_ring_1_70032 00:05:35.564 element at address: 0x200015ef4580 with size: 1.000488 MiB 00:05:35.564 associated memzone info: size: 1.000366 MiB name: RG_ring_4_70032 00:05:35.564 element at address: 0x200034afe940 with size: 1.000488 MiB 00:05:35.564 associated memzone info: size: 1.000366 MiB name: RG_ring_5_70032 00:05:35.564 element at address: 0x200003a7f680 with size: 0.500488 MiB 00:05:35.564 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_70032 00:05:35.564 element at address: 0x200003e7eec0 with size: 0.500488 MiB 00:05:35.564 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_70032 00:05:35.564 element at address: 0x20000d87db80 with size: 0.500488 MiB 00:05:35.564 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:35.564 element at address: 0x20000707b780 with size: 0.500488 MiB 00:05:35.564 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:35.564 element at address: 0x20001c27c540 with size: 0.250488 MiB 00:05:35.564 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:35.564 element at address: 0x200003a5e7c0 with size: 0.125488 MiB 00:05:35.564 associated memzone info: size: 0.125366 MiB name: RG_ring_2_70032 00:05:35.564 element at address: 0x2000096f5b80 with size: 0.031738 MiB 
00:05:35.564 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:35.564 element at address: 0x20002ac65680 with size: 0.023743 MiB 00:05:35.564 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:35.564 element at address: 0x200003a5a500 with size: 0.016113 MiB 00:05:35.564 associated memzone info: size: 0.015991 MiB name: RG_ring_3_70032 00:05:35.564 element at address: 0x20002ac6b7c0 with size: 0.002441 MiB 00:05:35.564 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:35.564 element at address: 0x2000002d6780 with size: 0.000305 MiB 00:05:35.564 associated memzone info: size: 0.000183 MiB name: MP_msgpool_70032 00:05:35.564 element at address: 0x200003aff940 with size: 0.000305 MiB 00:05:35.564 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_70032 00:05:35.564 element at address: 0x200003a5a300 with size: 0.000305 MiB 00:05:35.564 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_70032 00:05:35.564 element at address: 0x20002ac6c280 with size: 0.000305 MiB 00:05:35.564 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:35.564 04:54:46 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:35.564 04:54:46 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 70032 00:05:35.564 04:54:46 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 70032 ']' 00:05:35.564 04:54:46 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 70032 00:05:35.564 04:54:46 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:05:35.564 04:54:46 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:35.564 04:54:46 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70032 00:05:35.564 04:54:46 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:35.564 04:54:46 dpdk_mem_utility -- 
common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:35.564 killing process with pid 70032 00:05:35.564 04:54:46 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70032' 00:05:35.564 04:54:46 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 70032 00:05:35.564 04:54:46 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 70032 00:05:35.824 00:05:35.824 real 0m1.628s 00:05:35.824 user 0m1.572s 00:05:35.824 sys 0m0.487s 00:05:35.824 04:54:46 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:35.824 04:54:46 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:35.824 ************************************ 00:05:35.824 END TEST dpdk_mem_utility 00:05:35.824 ************************************ 00:05:35.824 04:54:46 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:35.824 04:54:46 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:35.824 04:54:46 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:35.824 04:54:46 -- common/autotest_common.sh@10 -- # set +x 00:05:35.824 ************************************ 00:05:35.824 START TEST event 00:05:35.824 ************************************ 00:05:35.824 04:54:46 event -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:36.084 * Looking for test storage... 
00:05:36.084 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:36.084 04:54:46 event -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:36.084 04:54:46 event -- common/autotest_common.sh@1681 -- # lcov --version 00:05:36.084 04:54:46 event -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:36.084 04:54:46 event -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:36.084 04:54:46 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:36.084 04:54:46 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:36.084 04:54:46 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:36.084 04:54:46 event -- scripts/common.sh@336 -- # IFS=.-: 00:05:36.084 04:54:46 event -- scripts/common.sh@336 -- # read -ra ver1 00:05:36.084 04:54:46 event -- scripts/common.sh@337 -- # IFS=.-: 00:05:36.084 04:54:46 event -- scripts/common.sh@337 -- # read -ra ver2 00:05:36.084 04:54:46 event -- scripts/common.sh@338 -- # local 'op=<' 00:05:36.084 04:54:46 event -- scripts/common.sh@340 -- # ver1_l=2 00:05:36.084 04:54:46 event -- scripts/common.sh@341 -- # ver2_l=1 00:05:36.084 04:54:46 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:36.084 04:54:46 event -- scripts/common.sh@344 -- # case "$op" in 00:05:36.084 04:54:46 event -- scripts/common.sh@345 -- # : 1 00:05:36.084 04:54:46 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:36.084 04:54:46 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:36.084 04:54:46 event -- scripts/common.sh@365 -- # decimal 1 00:05:36.084 04:54:46 event -- scripts/common.sh@353 -- # local d=1 00:05:36.084 04:54:46 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:36.084 04:54:46 event -- scripts/common.sh@355 -- # echo 1 00:05:36.084 04:54:46 event -- scripts/common.sh@365 -- # ver1[v]=1 00:05:36.084 04:54:46 event -- scripts/common.sh@366 -- # decimal 2 00:05:36.084 04:54:46 event -- scripts/common.sh@353 -- # local d=2 00:05:36.084 04:54:46 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:36.084 04:54:46 event -- scripts/common.sh@355 -- # echo 2 00:05:36.084 04:54:46 event -- scripts/common.sh@366 -- # ver2[v]=2 00:05:36.084 04:54:46 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:36.084 04:54:46 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:36.084 04:54:46 event -- scripts/common.sh@368 -- # return 0 00:05:36.084 04:54:46 event -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:36.084 04:54:46 event -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:36.084 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.084 --rc genhtml_branch_coverage=1 00:05:36.084 --rc genhtml_function_coverage=1 00:05:36.084 --rc genhtml_legend=1 00:05:36.084 --rc geninfo_all_blocks=1 00:05:36.084 --rc geninfo_unexecuted_blocks=1 00:05:36.084 00:05:36.084 ' 00:05:36.084 04:54:46 event -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:36.084 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.084 --rc genhtml_branch_coverage=1 00:05:36.084 --rc genhtml_function_coverage=1 00:05:36.084 --rc genhtml_legend=1 00:05:36.084 --rc geninfo_all_blocks=1 00:05:36.084 --rc geninfo_unexecuted_blocks=1 00:05:36.084 00:05:36.084 ' 00:05:36.084 04:54:46 event -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:36.084 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:05:36.084 --rc genhtml_branch_coverage=1 00:05:36.084 --rc genhtml_function_coverage=1 00:05:36.084 --rc genhtml_legend=1 00:05:36.084 --rc geninfo_all_blocks=1 00:05:36.084 --rc geninfo_unexecuted_blocks=1 00:05:36.084 00:05:36.084 ' 00:05:36.084 04:54:46 event -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:36.084 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.084 --rc genhtml_branch_coverage=1 00:05:36.084 --rc genhtml_function_coverage=1 00:05:36.084 --rc genhtml_legend=1 00:05:36.084 --rc geninfo_all_blocks=1 00:05:36.084 --rc geninfo_unexecuted_blocks=1 00:05:36.084 00:05:36.084 ' 00:05:36.084 04:54:46 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:36.084 04:54:46 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:36.084 04:54:46 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:36.084 04:54:46 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:05:36.084 04:54:46 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:36.084 04:54:46 event -- common/autotest_common.sh@10 -- # set +x 00:05:36.084 ************************************ 00:05:36.084 START TEST event_perf 00:05:36.084 ************************************ 00:05:36.084 04:54:46 event.event_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:36.084 Running I/O for 1 seconds...[2024-12-14 04:54:46.937204] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:05:36.084 [2024-12-14 04:54:46.937623] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70118 ] 
00:05:36.407 [2024-12-14 04:54:47.095512] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 
00:05:36.407 [2024-12-14 04:54:47.155998] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 
00:05:36.407 [2024-12-14 04:54:47.156255] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 
00:05:36.407 [2024-12-14 04:54:47.156267] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 
00:05:36.407 [2024-12-14 04:54:47.156382] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 
00:05:37.809 Running I/O for 1 seconds... 
00:05:37.809 lcore 0: 133774 
00:05:37.809 lcore 1: 133773 
00:05:37.809 lcore 2: 133775 
00:05:37.809 lcore 3: 133773 
00:05:37.809 done. 
00:05:37.809 00:05:37.809 real 0m1.409s 00:05:37.809 user 0m4.148s 00:05:37.809 sys 0m0.138s 00:05:37.809 04:54:48 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:37.809 04:54:48 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:37.809 ************************************ 00:05:37.809 END TEST event_perf 00:05:37.809 ************************************ 00:05:37.809 04:54:48 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:37.809 04:54:48 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:05:37.809 04:54:48 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:37.809 04:54:48 event -- common/autotest_common.sh@10 -- # set +x 00:05:37.809 ************************************ 00:05:37.809 START TEST event_reactor 00:05:37.809 ************************************ 00:05:37.809 04:54:48 event.event_reactor -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:37.809 [2024-12-14 04:54:48.407054] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:05:37.809 [2024-12-14 04:54:48.407176] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70152 ] 00:05:37.809 [2024-12-14 04:54:48.566271] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:37.809 [2024-12-14 04:54:48.638064] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.189 test_start 00:05:39.189 oneshot 00:05:39.189 tick 100 00:05:39.189 tick 100 00:05:39.189 tick 250 00:05:39.189 tick 100 00:05:39.189 tick 100 00:05:39.189 tick 100 00:05:39.189 tick 250 00:05:39.189 tick 500 00:05:39.189 tick 100 00:05:39.189 tick 100 00:05:39.189 tick 250 00:05:39.189 tick 100 00:05:39.189 tick 100 00:05:39.189 test_end 00:05:39.189 00:05:39.189 real 0m1.413s 00:05:39.189 user 0m1.193s 00:05:39.189 sys 0m0.111s 00:05:39.189 04:54:49 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:39.189 04:54:49 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:39.189 ************************************ 00:05:39.189 END TEST event_reactor 00:05:39.189 ************************************ 00:05:39.189 04:54:49 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:39.189 04:54:49 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:05:39.189 04:54:49 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:39.189 04:54:49 event -- common/autotest_common.sh@10 -- # set +x 00:05:39.189 ************************************ 00:05:39.189 START TEST event_reactor_perf 00:05:39.189 ************************************ 00:05:39.189 04:54:49 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:39.189 [2024-12-14 
04:54:49.881763] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:05:39.189 [2024-12-14 04:54:49.881896] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70194 ] 00:05:39.189 [2024-12-14 04:54:50.041158] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:39.449 [2024-12-14 04:54:50.132134] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.387 test_start 00:05:40.387 test_end 00:05:40.387 Performance: 381095 events per second 00:05:40.646 00:05:40.646 real 0m1.430s 00:05:40.646 user 0m1.189s 00:05:40.646 sys 0m0.132s 00:05:40.646 04:54:51 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:40.646 04:54:51 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:40.646 ************************************ 00:05:40.646 END TEST event_reactor_perf 00:05:40.646 ************************************ 00:05:40.646 04:54:51 event -- event/event.sh@49 -- # uname -s 00:05:40.646 04:54:51 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:40.646 04:54:51 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:40.646 04:54:51 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:40.646 04:54:51 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:40.646 04:54:51 event -- common/autotest_common.sh@10 -- # set +x 00:05:40.646 ************************************ 00:05:40.646 START TEST event_scheduler 00:05:40.646 ************************************ 00:05:40.646 04:54:51 event.event_scheduler -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:40.646 * Looking for test storage... 
00:05:40.646 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:40.646 04:54:51 event.event_scheduler -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:40.646 04:54:51 event.event_scheduler -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:40.646 04:54:51 event.event_scheduler -- common/autotest_common.sh@1681 -- # lcov --version 00:05:40.905 04:54:51 event.event_scheduler -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:40.905 04:54:51 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:40.905 04:54:51 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:40.905 04:54:51 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:40.905 04:54:51 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:40.905 04:54:51 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:40.905 04:54:51 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:40.905 04:54:51 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:40.905 04:54:51 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:40.905 04:54:51 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:40.905 04:54:51 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:40.905 04:54:51 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:40.905 04:54:51 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:40.905 04:54:51 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:40.905 04:54:51 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:40.905 04:54:51 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:40.905 04:54:51 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:40.905 04:54:51 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:40.905 04:54:51 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:40.905 04:54:51 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:40.905 04:54:51 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:40.905 04:54:51 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:40.905 04:54:51 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:40.905 04:54:51 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:40.905 04:54:51 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:40.905 04:54:51 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:40.905 04:54:51 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:40.905 04:54:51 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:40.905 04:54:51 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:40.905 04:54:51 event.event_scheduler -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:40.905 04:54:51 event.event_scheduler -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:40.905 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.905 --rc genhtml_branch_coverage=1 00:05:40.905 --rc genhtml_function_coverage=1 00:05:40.905 --rc genhtml_legend=1 00:05:40.905 --rc geninfo_all_blocks=1 00:05:40.905 --rc geninfo_unexecuted_blocks=1 00:05:40.905 00:05:40.905 ' 00:05:40.905 04:54:51 event.event_scheduler -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:40.905 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.905 --rc genhtml_branch_coverage=1 00:05:40.905 --rc genhtml_function_coverage=1 00:05:40.905 --rc 
genhtml_legend=1 00:05:40.905 --rc geninfo_all_blocks=1 00:05:40.905 --rc geninfo_unexecuted_blocks=1 00:05:40.905 00:05:40.905 ' 00:05:40.905 04:54:51 event.event_scheduler -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:40.905 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.905 --rc genhtml_branch_coverage=1 00:05:40.905 --rc genhtml_function_coverage=1 00:05:40.905 --rc genhtml_legend=1 00:05:40.905 --rc geninfo_all_blocks=1 00:05:40.905 --rc geninfo_unexecuted_blocks=1 00:05:40.905 00:05:40.905 ' 00:05:40.905 04:54:51 event.event_scheduler -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:40.905 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.905 --rc genhtml_branch_coverage=1 00:05:40.905 --rc genhtml_function_coverage=1 00:05:40.905 --rc genhtml_legend=1 00:05:40.905 --rc geninfo_all_blocks=1 00:05:40.905 --rc geninfo_unexecuted_blocks=1 00:05:40.905 00:05:40.905 ' 00:05:40.905 04:54:51 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:40.905 04:54:51 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=70259 00:05:40.905 04:54:51 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:40.905 04:54:51 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:40.905 04:54:51 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 70259 00:05:40.905 04:54:51 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 70259 ']' 00:05:40.905 04:54:51 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:40.905 04:54:51 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:40.905 04:54:51 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:05:40.905 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:40.905 04:54:51 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:40.905 04:54:51 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:40.905 [2024-12-14 04:54:51.632563] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:05:40.905 [2024-12-14 04:54:51.632701] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70259 ] 00:05:41.163 [2024-12-14 04:54:51.793698] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:41.163 [2024-12-14 04:54:51.842599] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.163 [2024-12-14 04:54:51.842896] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:41.163 [2024-12-14 04:54:51.843043] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:05:41.163 [2024-12-14 04:54:51.842921] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:05:41.730 04:54:52 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:41.730 04:54:52 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:05:41.730 04:54:52 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:41.730 04:54:52 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:41.730 04:54:52 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:41.730 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:41.730 POWER: Cannot set governor of lcore 0 to userspace 00:05:41.730 POWER: failed to open 
/sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:41.730 POWER: Cannot set governor of lcore 0 to performance 00:05:41.730 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:41.730 POWER: Cannot set governor of lcore 0 to userspace 00:05:41.730 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:41.730 POWER: Cannot set governor of lcore 0 to userspace 00:05:41.730 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:41.730 POWER: Unable to set Power Management Environment for lcore 0 00:05:41.730 [2024-12-14 04:54:52.475752] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:05:41.730 [2024-12-14 04:54:52.475773] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:05:41.730 [2024-12-14 04:54:52.475801] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:41.730 [2024-12-14 04:54:52.475825] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:41.730 [2024-12-14 04:54:52.475832] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:41.730 [2024-12-14 04:54:52.475841] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:41.730 04:54:52 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:41.730 04:54:52 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:41.730 04:54:52 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:41.730 04:54:52 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:41.730 [2024-12-14 04:54:52.546349] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:05:41.730 04:54:52 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:41.730 04:54:52 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:41.730 04:54:52 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:41.730 04:54:52 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:41.730 04:54:52 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:41.730 ************************************ 00:05:41.730 START TEST scheduler_create_thread 00:05:41.730 ************************************ 00:05:41.730 04:54:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:05:41.730 04:54:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:41.730 04:54:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:41.730 04:54:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:41.730 2 00:05:41.731 04:54:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:41.731 04:54:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:41.731 04:54:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:41.731 04:54:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:41.731 3 00:05:41.731 04:54:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:41.731 04:54:52 event.event_scheduler.scheduler_create_thread -- 
scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:41.731 04:54:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:41.731 04:54:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:41.731 4 00:05:41.731 04:54:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:41.731 04:54:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:41.731 04:54:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:41.731 04:54:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:41.731 5 00:05:41.731 04:54:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:41.731 04:54:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:41.731 04:54:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:41.731 04:54:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:41.990 6 00:05:41.990 04:54:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:41.990 04:54:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:41.990 04:54:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:41.990 04:54:52 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@10 -- # set +x 00:05:41.990 7 00:05:41.990 04:54:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:41.990 04:54:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:41.990 04:54:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:41.990 04:54:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:41.990 8 00:05:41.990 04:54:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:41.990 04:54:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:41.990 04:54:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:41.990 04:54:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:42.928 9 00:05:42.928 04:54:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:42.928 04:54:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:42.928 04:54:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:42.928 04:54:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:43.866 10 00:05:43.866 04:54:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:43.866 04:54:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n 
half_active -a 0 00:05:43.866 04:54:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:43.866 04:54:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:44.805 04:54:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:44.805 04:54:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:44.805 04:54:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:44.805 04:54:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:44.805 04:54:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:45.374 04:54:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:45.374 04:54:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:45.374 04:54:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:45.374 04:54:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:46.310 04:54:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:46.310 04:54:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:46.310 04:54:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:46.310 04:54:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:46.310 04:54:56 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:46.568 04:54:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:46.568 00:05:46.568 real 0m4.848s 00:05:46.568 user 0m0.026s 00:05:46.568 sys 0m0.009s 00:05:46.568 04:54:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:46.568 04:54:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:46.568 ************************************ 00:05:46.568 END TEST scheduler_create_thread 00:05:46.568 ************************************ 00:05:46.568 04:54:57 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:46.568 04:54:57 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 70259 00:05:46.568 04:54:57 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 70259 ']' 00:05:46.568 04:54:57 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 70259 00:05:46.568 04:54:57 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:05:46.827 04:54:57 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:46.827 04:54:57 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70259 00:05:46.827 04:54:57 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:05:46.827 04:54:57 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:05:46.827 04:54:57 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70259' 00:05:46.827 killing process with pid 70259 00:05:46.827 04:54:57 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 70259 00:05:46.827 04:54:57 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 70259 00:05:46.827 [2024-12-14 04:54:57.682373] scheduler.c: 
360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:05:47.397 00:05:47.397 real 0m6.631s 00:05:47.397 user 0m14.809s 00:05:47.397 sys 0m0.437s 00:05:47.397 04:54:57 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:47.397 04:54:57 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:47.397 ************************************ 00:05:47.397 END TEST event_scheduler 00:05:47.397 ************************************ 00:05:47.397 04:54:58 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:47.397 04:54:58 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:47.397 04:54:58 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:47.397 04:54:58 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:47.397 04:54:58 event -- common/autotest_common.sh@10 -- # set +x 00:05:47.397 ************************************ 00:05:47.397 START TEST app_repeat 00:05:47.397 ************************************ 00:05:47.397 04:54:58 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:05:47.397 04:54:58 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:47.397 04:54:58 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:47.397 04:54:58 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:47.397 04:54:58 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:47.397 04:54:58 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:47.397 04:54:58 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:47.397 04:54:58 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:47.397 04:54:58 event.app_repeat -- event/event.sh@19 -- # repeat_pid=70387 00:05:47.397 04:54:58 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:47.397 
04:54:58 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:47.397 Process app_repeat pid: 70387 00:05:47.397 04:54:58 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 70387' 00:05:47.397 04:54:58 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:47.397 spdk_app_start Round 0 00:05:47.397 04:54:58 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:47.397 04:54:58 event.app_repeat -- event/event.sh@25 -- # waitforlisten 70387 /var/tmp/spdk-nbd.sock 00:05:47.397 04:54:58 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 70387 ']' 00:05:47.397 04:54:58 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:47.397 04:54:58 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:47.397 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:47.397 04:54:58 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:47.397 04:54:58 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:47.397 04:54:58 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:47.397 [2024-12-14 04:54:58.107105] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:05:47.397 [2024-12-14 04:54:58.107243] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70387 ] 00:05:47.397 [2024-12-14 04:54:58.270015] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:47.656 [2024-12-14 04:54:58.346728] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.656 [2024-12-14 04:54:58.346820] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:48.225 04:54:58 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:48.225 04:54:58 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:48.225 04:54:58 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:48.483 Malloc0 00:05:48.483 04:54:59 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:48.483 Malloc1 00:05:48.483 04:54:59 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:48.483 04:54:59 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:48.483 04:54:59 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:48.483 04:54:59 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:48.484 04:54:59 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:48.484 04:54:59 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:48.484 04:54:59 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:48.484 04:54:59 event.app_repeat -- 
bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:48.484 04:54:59 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:48.484 04:54:59 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:48.743 04:54:59 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:48.743 04:54:59 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:48.743 04:54:59 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:48.743 04:54:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:48.743 04:54:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:48.743 04:54:59 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:48.743 /dev/nbd0 00:05:48.743 04:54:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:48.743 04:54:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:48.743 04:54:59 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:48.743 04:54:59 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:48.743 04:54:59 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:48.743 04:54:59 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:48.743 04:54:59 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:05:48.743 04:54:59 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:48.743 04:54:59 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:48.743 04:54:59 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:48.743 04:54:59 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:48.743 1+0 records in 00:05:48.743 1+0 
records out 00:05:48.743 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000339688 s, 12.1 MB/s 00:05:48.743 04:54:59 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:48.743 04:54:59 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:48.743 04:54:59 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:48.743 04:54:59 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:48.743 04:54:59 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:48.743 04:54:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:48.744 04:54:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:48.744 04:54:59 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:49.004 /dev/nbd1 00:05:49.004 04:54:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:49.004 04:54:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:49.004 04:54:59 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:49.004 04:54:59 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:49.004 04:54:59 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:49.004 04:54:59 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:49.004 04:54:59 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:49.004 04:54:59 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:49.004 04:54:59 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:49.004 04:54:59 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:49.004 04:54:59 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:49.004 1+0 records in 00:05:49.004 1+0 records out 00:05:49.004 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000298398 s, 13.7 MB/s 00:05:49.004 04:54:59 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:49.004 04:54:59 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:49.004 04:54:59 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:49.004 04:54:59 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:49.004 04:54:59 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:49.004 04:54:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:49.004 04:54:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:49.004 04:54:59 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:49.004 04:54:59 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:49.004 04:54:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:49.263 04:55:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:49.263 { 00:05:49.263 "nbd_device": "/dev/nbd0", 00:05:49.263 "bdev_name": "Malloc0" 00:05:49.263 }, 00:05:49.263 { 00:05:49.263 "nbd_device": "/dev/nbd1", 00:05:49.263 "bdev_name": "Malloc1" 00:05:49.263 } 00:05:49.263 ]' 00:05:49.263 04:55:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:49.263 { 00:05:49.263 "nbd_device": "/dev/nbd0", 00:05:49.263 "bdev_name": "Malloc0" 00:05:49.263 }, 00:05:49.263 { 00:05:49.263 "nbd_device": "/dev/nbd1", 00:05:49.263 "bdev_name": "Malloc1" 00:05:49.263 } 00:05:49.263 ]' 00:05:49.263 04:55:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 
00:05:49.263 04:55:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:49.263 /dev/nbd1' 00:05:49.263 04:55:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:49.263 /dev/nbd1' 00:05:49.263 04:55:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:49.263 04:55:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:49.263 04:55:00 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:49.263 04:55:00 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:49.263 04:55:00 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:49.263 04:55:00 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:49.263 04:55:00 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:49.263 04:55:00 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:49.263 04:55:00 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:49.263 04:55:00 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:49.263 04:55:00 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:49.263 04:55:00 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:49.263 256+0 records in 00:05:49.263 256+0 records out 00:05:49.263 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00451888 s, 232 MB/s 00:05:49.263 04:55:00 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:49.263 04:55:00 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:49.263 256+0 records in 00:05:49.263 256+0 records out 00:05:49.263 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0228672 s, 45.9 MB/s 00:05:49.263 04:55:00 event.app_repeat -- 
bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:49.263 04:55:00 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:49.523 256+0 records in 00:05:49.523 256+0 records out 00:05:49.523 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0212632 s, 49.3 MB/s 00:05:49.523 04:55:00 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:49.523 04:55:00 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:49.523 04:55:00 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:49.523 04:55:00 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:49.523 04:55:00 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:49.523 04:55:00 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:49.523 04:55:00 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:49.523 04:55:00 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:49.523 04:55:00 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:49.523 04:55:00 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:49.523 04:55:00 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:49.523 04:55:00 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:49.523 04:55:00 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:49.523 04:55:00 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:49.523 04:55:00 event.app_repeat -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:49.523 04:55:00 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:49.523 04:55:00 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:49.523 04:55:00 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:49.523 04:55:00 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:49.523 04:55:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:49.523 04:55:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:49.523 04:55:00 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:49.523 04:55:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:49.523 04:55:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:49.523 04:55:00 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:49.523 04:55:00 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:49.523 04:55:00 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:49.523 04:55:00 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:49.523 04:55:00 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:49.783 04:55:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:49.783 04:55:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:49.783 04:55:00 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:49.783 04:55:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:49.783 04:55:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:49.783 04:55:00 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:49.783 04:55:00 event.app_repeat -- bdev/nbd_common.sh@41 -- # 
break 00:05:49.783 04:55:00 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:49.783 04:55:00 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:49.783 04:55:00 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:49.783 04:55:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:50.042 04:55:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:50.042 04:55:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:50.042 04:55:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:50.042 04:55:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:50.042 04:55:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:50.042 04:55:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:50.042 04:55:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:50.042 04:55:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:50.042 04:55:00 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:50.042 04:55:00 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:50.042 04:55:00 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:50.042 04:55:00 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:50.042 04:55:00 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:50.301 04:55:01 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:50.301 [2024-12-14 04:55:01.170236] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:50.575 [2024-12-14 04:55:01.212961] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.575 [2024-12-14 04:55:01.212966] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:50.575 
[2024-12-14 04:55:01.254339] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:50.575 [2024-12-14 04:55:01.254416] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:53.172 04:55:04 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:53.172 spdk_app_start Round 1 00:05:53.172 04:55:04 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:53.172 04:55:04 event.app_repeat -- event/event.sh@25 -- # waitforlisten 70387 /var/tmp/spdk-nbd.sock 00:05:53.172 04:55:04 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 70387 ']' 00:05:53.172 04:55:04 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:53.172 04:55:04 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:53.172 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:53.172 04:55:04 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:05:53.172 04:55:04 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:53.172 04:55:04 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:53.431 04:55:04 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:53.431 04:55:04 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:53.431 04:55:04 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:53.689 Malloc0 00:05:53.689 04:55:04 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:53.947 Malloc1 00:05:53.947 04:55:04 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:53.947 04:55:04 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:53.947 04:55:04 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:53.947 04:55:04 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:53.947 04:55:04 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:53.947 04:55:04 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:53.947 04:55:04 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:53.947 04:55:04 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:53.947 04:55:04 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:53.947 04:55:04 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:53.947 04:55:04 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:53.947 04:55:04 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:53.947 04:55:04 
event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:53.947 04:55:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:53.947 04:55:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:53.947 04:55:04 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:54.207 /dev/nbd0 00:05:54.207 04:55:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:54.207 04:55:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:54.207 04:55:04 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:54.207 04:55:04 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:54.207 04:55:04 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:54.207 04:55:04 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:54.207 04:55:04 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:05:54.207 04:55:04 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:54.207 04:55:04 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:54.207 04:55:04 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:54.207 04:55:04 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:54.207 1+0 records in 00:05:54.207 1+0 records out 00:05:54.207 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000315115 s, 13.0 MB/s 00:05:54.207 04:55:04 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:54.207 04:55:04 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:54.207 04:55:04 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:54.207 
04:55:04 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:54.207 04:55:04 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:54.207 04:55:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:54.207 04:55:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:54.207 04:55:04 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:54.207 /dev/nbd1 00:05:54.207 04:55:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:54.207 04:55:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:54.207 04:55:05 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:54.207 04:55:05 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:54.207 04:55:05 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:54.207 04:55:05 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:54.207 04:55:05 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:54.207 04:55:05 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:54.207 04:55:05 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:54.207 04:55:05 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:54.207 04:55:05 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:54.467 1+0 records in 00:05:54.467 1+0 records out 00:05:54.467 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000345368 s, 11.9 MB/s 00:05:54.467 04:55:05 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:54.467 04:55:05 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:54.467 04:55:05 event.app_repeat 
-- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:54.467 04:55:05 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:54.467 04:55:05 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:54.467 04:55:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:54.467 04:55:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:54.467 04:55:05 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:54.467 04:55:05 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:54.467 04:55:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:54.467 04:55:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:54.467 { 00:05:54.467 "nbd_device": "/dev/nbd0", 00:05:54.467 "bdev_name": "Malloc0" 00:05:54.467 }, 00:05:54.467 { 00:05:54.467 "nbd_device": "/dev/nbd1", 00:05:54.467 "bdev_name": "Malloc1" 00:05:54.467 } 00:05:54.467 ]' 00:05:54.467 04:55:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:54.467 04:55:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:54.467 { 00:05:54.467 "nbd_device": "/dev/nbd0", 00:05:54.467 "bdev_name": "Malloc0" 00:05:54.467 }, 00:05:54.467 { 00:05:54.467 "nbd_device": "/dev/nbd1", 00:05:54.467 "bdev_name": "Malloc1" 00:05:54.467 } 00:05:54.467 ]' 00:05:54.467 04:55:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:54.467 /dev/nbd1' 00:05:54.467 04:55:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:54.467 /dev/nbd1' 00:05:54.467 04:55:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:54.467 04:55:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:54.467 04:55:05 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:54.467 
04:55:05 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:54.467 04:55:05 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:54.467 04:55:05 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:54.467 04:55:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:54.467 04:55:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:54.467 04:55:05 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:54.467 04:55:05 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:54.467 04:55:05 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:54.467 04:55:05 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:54.727 256+0 records in 00:05:54.727 256+0 records out 00:05:54.727 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0127467 s, 82.3 MB/s 00:05:54.727 04:55:05 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:54.727 04:55:05 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:54.727 256+0 records in 00:05:54.727 256+0 records out 00:05:54.727 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0160932 s, 65.2 MB/s 00:05:54.727 04:55:05 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:54.727 04:55:05 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:54.727 256+0 records in 00:05:54.727 256+0 records out 00:05:54.727 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0242353 s, 43.3 MB/s 00:05:54.727 04:55:05 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 
00:05:54.727 04:55:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:54.727 04:55:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:54.727 04:55:05 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:54.727 04:55:05 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:54.727 04:55:05 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:54.727 04:55:05 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:54.727 04:55:05 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:54.727 04:55:05 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:54.727 04:55:05 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:54.727 04:55:05 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:54.727 04:55:05 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:54.727 04:55:05 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:54.727 04:55:05 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:54.727 04:55:05 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:54.727 04:55:05 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:54.727 04:55:05 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:54.727 04:55:05 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:54.727 04:55:05 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:54.986 04:55:05 
event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:54.986 04:55:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:54.986 04:55:05 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:54.986 04:55:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:54.986 04:55:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:54.986 04:55:05 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:54.986 04:55:05 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:54.986 04:55:05 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:54.986 04:55:05 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:54.986 04:55:05 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:54.986 04:55:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:54.986 04:55:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:54.986 04:55:05 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:54.986 04:55:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:54.986 04:55:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:54.986 04:55:05 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:54.986 04:55:05 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:54.986 04:55:05 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:54.986 04:55:05 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:54.986 04:55:05 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:54.986 04:55:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:55.246 04:55:06 
event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:55.246 04:55:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:55.246 04:55:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:55.246 04:55:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:55.246 04:55:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:55.246 04:55:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:55.246 04:55:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:55.246 04:55:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:55.246 04:55:06 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:55.246 04:55:06 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:55.246 04:55:06 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:55.246 04:55:06 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:55.246 04:55:06 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:55.505 04:55:06 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:55.765 [2024-12-14 04:55:06.472393] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:55.765 [2024-12-14 04:55:06.513926] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.765 [2024-12-14 04:55:06.513961] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:55.765 [2024-12-14 04:55:06.555333] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:55.765 [2024-12-14 04:55:06.555411] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
00:05:59.054 04:55:09 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:59.054 spdk_app_start Round 2 00:05:59.054 04:55:09 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:59.054 04:55:09 event.app_repeat -- event/event.sh@25 -- # waitforlisten 70387 /var/tmp/spdk-nbd.sock 00:05:59.054 04:55:09 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 70387 ']' 00:05:59.054 04:55:09 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:59.054 04:55:09 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:59.054 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:59.054 04:55:09 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:59.054 04:55:09 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:59.054 04:55:09 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:59.054 04:55:09 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:59.054 04:55:09 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:59.054 04:55:09 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:59.054 Malloc0 00:05:59.054 04:55:09 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:59.054 Malloc1 00:05:59.054 04:55:09 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:59.054 04:55:09 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:59.054 04:55:09 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:59.054 
04:55:09 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:59.054 04:55:09 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:59.054 04:55:09 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:59.054 04:55:09 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:59.054 04:55:09 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:59.054 04:55:09 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:59.055 04:55:09 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:59.055 04:55:09 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:59.055 04:55:09 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:59.055 04:55:09 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:59.055 04:55:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:59.055 04:55:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:59.055 04:55:09 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:59.314 /dev/nbd0 00:05:59.314 04:55:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:59.314 04:55:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:59.314 04:55:10 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:59.314 04:55:10 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:59.314 04:55:10 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:59.314 04:55:10 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:59.314 04:55:10 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:05:59.314 04:55:10 
event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:59.314 04:55:10 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:59.314 04:55:10 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:59.314 04:55:10 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:59.314 1+0 records in 00:05:59.314 1+0 records out 00:05:59.314 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000390722 s, 10.5 MB/s 00:05:59.314 04:55:10 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:59.314 04:55:10 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:59.314 04:55:10 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:59.314 04:55:10 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:59.314 04:55:10 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:59.314 04:55:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:59.314 04:55:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:59.314 04:55:10 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:59.573 /dev/nbd1 00:05:59.573 04:55:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:59.573 04:55:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:59.573 04:55:10 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:59.573 04:55:10 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:59.573 04:55:10 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:59.573 04:55:10 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:59.573 04:55:10 
event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:59.573 04:55:10 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:59.573 04:55:10 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:59.573 04:55:10 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:59.573 04:55:10 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:59.573 1+0 records in 00:05:59.573 1+0 records out 00:05:59.573 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000352605 s, 11.6 MB/s 00:05:59.573 04:55:10 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:59.573 04:55:10 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:59.573 04:55:10 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:59.573 04:55:10 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:59.573 04:55:10 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:59.573 04:55:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:59.573 04:55:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:59.573 04:55:10 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:59.573 04:55:10 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:59.573 04:55:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:59.831 04:55:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:59.831 { 00:05:59.831 "nbd_device": "/dev/nbd0", 00:05:59.831 "bdev_name": "Malloc0" 00:05:59.831 }, 00:05:59.831 { 00:05:59.831 "nbd_device": "/dev/nbd1", 00:05:59.831 "bdev_name": 
"Malloc1" 00:05:59.831 } 00:05:59.831 ]' 00:05:59.831 04:55:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:59.831 { 00:05:59.831 "nbd_device": "/dev/nbd0", 00:05:59.831 "bdev_name": "Malloc0" 00:05:59.831 }, 00:05:59.831 { 00:05:59.831 "nbd_device": "/dev/nbd1", 00:05:59.831 "bdev_name": "Malloc1" 00:05:59.831 } 00:05:59.831 ]' 00:05:59.831 04:55:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:59.831 04:55:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:59.831 /dev/nbd1' 00:05:59.831 04:55:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:59.831 /dev/nbd1' 00:05:59.831 04:55:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:59.831 04:55:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:59.831 04:55:10 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:59.831 04:55:10 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:59.831 04:55:10 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:59.831 04:55:10 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:59.831 04:55:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:59.831 04:55:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:59.831 04:55:10 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:59.831 04:55:10 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:59.831 04:55:10 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:59.831 04:55:10 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:59.831 256+0 records in 00:05:59.831 256+0 records out 00:05:59.831 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0137913 s, 76.0 MB/s 
00:05:59.831 04:55:10 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:59.831 04:55:10 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:59.831 256+0 records in 00:05:59.831 256+0 records out 00:05:59.831 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0228871 s, 45.8 MB/s 00:05:59.831 04:55:10 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:59.831 04:55:10 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:00.090 256+0 records in 00:06:00.090 256+0 records out 00:06:00.090 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0200391 s, 52.3 MB/s 00:06:00.090 04:55:10 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:00.090 04:55:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:00.090 04:55:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:00.090 04:55:10 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:00.090 04:55:10 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:00.090 04:55:10 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:00.090 04:55:10 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:00.090 04:55:10 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:00.090 04:55:10 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:00.090 04:55:10 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:00.090 04:55:10 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 
/dev/nbd1 00:06:00.090 04:55:10 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:00.090 04:55:10 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:00.091 04:55:10 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:00.091 04:55:10 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:00.091 04:55:10 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:00.091 04:55:10 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:00.091 04:55:10 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:00.091 04:55:10 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:00.091 04:55:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:00.091 04:55:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:00.091 04:55:10 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:00.091 04:55:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:00.091 04:55:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:00.091 04:55:10 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:00.091 04:55:10 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:00.091 04:55:10 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:00.091 04:55:10 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:00.091 04:55:10 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:00.349 04:55:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:00.349 04:55:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # 
waitfornbd_exit nbd1 00:06:00.349 04:55:11 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:00.349 04:55:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:00.349 04:55:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:00.349 04:55:11 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:00.349 04:55:11 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:00.349 04:55:11 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:00.349 04:55:11 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:00.349 04:55:11 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:00.349 04:55:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:00.609 04:55:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:00.609 04:55:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:00.609 04:55:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:00.609 04:55:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:00.609 04:55:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:00.609 04:55:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:00.609 04:55:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:00.609 04:55:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:00.609 04:55:11 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:00.609 04:55:11 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:00.609 04:55:11 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:00.609 04:55:11 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:00.609 04:55:11 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:00.868 04:55:11 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:01.127 [2024-12-14 04:55:11.762951] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:01.127 [2024-12-14 04:55:11.805179] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.127 [2024-12-14 04:55:11.805209] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:01.127 [2024-12-14 04:55:11.847960] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:01.127 [2024-12-14 04:55:11.848021] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:04.420 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:04.420 04:55:14 event.app_repeat -- event/event.sh@38 -- # waitforlisten 70387 /var/tmp/spdk-nbd.sock 00:06:04.420 04:55:14 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 70387 ']' 00:06:04.420 04:55:14 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:04.420 04:55:14 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:04.420 04:55:14 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:06:04.420 04:55:14 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:04.420 04:55:14 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:04.420 04:55:14 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:04.420 04:55:14 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:04.420 04:55:14 event.app_repeat -- event/event.sh@39 -- # killprocess 70387 00:06:04.420 04:55:14 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 70387 ']' 00:06:04.420 04:55:14 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 70387 00:06:04.420 04:55:14 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:06:04.420 04:55:14 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:04.420 04:55:14 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70387 00:06:04.420 killing process with pid 70387 00:06:04.420 04:55:14 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:04.420 04:55:14 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:04.420 04:55:14 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70387' 00:06:04.420 04:55:14 event.app_repeat -- common/autotest_common.sh@969 -- # kill 70387 00:06:04.420 04:55:14 event.app_repeat -- common/autotest_common.sh@974 -- # wait 70387 00:06:04.420 spdk_app_start is called in Round 0. 00:06:04.420 Shutdown signal received, stop current app iteration 00:06:04.420 Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 reinitialization... 00:06:04.420 spdk_app_start is called in Round 1. 00:06:04.420 Shutdown signal received, stop current app iteration 00:06:04.420 Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 reinitialization... 00:06:04.420 spdk_app_start is called in Round 2. 
00:06:04.420 Shutdown signal received, stop current app iteration 00:06:04.420 Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 reinitialization... 00:06:04.420 spdk_app_start is called in Round 3. 00:06:04.420 Shutdown signal received, stop current app iteration 00:06:04.420 04:55:15 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:04.420 04:55:15 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:04.420 00:06:04.420 real 0m17.038s 00:06:04.420 user 0m37.151s 00:06:04.420 sys 0m2.617s 00:06:04.420 04:55:15 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:04.420 04:55:15 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:04.420 ************************************ 00:06:04.420 END TEST app_repeat 00:06:04.420 ************************************ 00:06:04.420 04:55:15 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:04.420 04:55:15 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:04.420 04:55:15 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:04.420 04:55:15 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:04.420 04:55:15 event -- common/autotest_common.sh@10 -- # set +x 00:06:04.420 ************************************ 00:06:04.420 START TEST cpu_locks 00:06:04.420 ************************************ 00:06:04.420 04:55:15 event.cpu_locks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:04.420 * Looking for test storage... 
00:06:04.420 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:04.420 04:55:15 event.cpu_locks -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:04.420 04:55:15 event.cpu_locks -- common/autotest_common.sh@1681 -- # lcov --version 00:06:04.420 04:55:15 event.cpu_locks -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:04.681 04:55:15 event.cpu_locks -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:04.681 04:55:15 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:04.681 04:55:15 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:04.681 04:55:15 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:04.681 04:55:15 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:06:04.681 04:55:15 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:06:04.681 04:55:15 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:06:04.681 04:55:15 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:06:04.681 04:55:15 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:06:04.681 04:55:15 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:06:04.681 04:55:15 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:06:04.681 04:55:15 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:04.681 04:55:15 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:06:04.681 04:55:15 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:06:04.681 04:55:15 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:04.681 04:55:15 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:04.681 04:55:15 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:06:04.681 04:55:15 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:06:04.681 04:55:15 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:04.681 04:55:15 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:06:04.681 04:55:15 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:06:04.681 04:55:15 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:06:04.681 04:55:15 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:06:04.681 04:55:15 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:04.681 04:55:15 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:06:04.681 04:55:15 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:06:04.681 04:55:15 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:04.681 04:55:15 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:04.681 04:55:15 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:06:04.681 04:55:15 event.cpu_locks -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:04.681 04:55:15 event.cpu_locks -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:04.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.681 --rc genhtml_branch_coverage=1 00:06:04.681 --rc genhtml_function_coverage=1 00:06:04.681 --rc genhtml_legend=1 00:06:04.681 --rc geninfo_all_blocks=1 00:06:04.681 --rc geninfo_unexecuted_blocks=1 00:06:04.681 00:06:04.681 ' 00:06:04.681 04:55:15 event.cpu_locks -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:04.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.681 --rc genhtml_branch_coverage=1 00:06:04.681 --rc genhtml_function_coverage=1 00:06:04.681 --rc genhtml_legend=1 00:06:04.681 --rc geninfo_all_blocks=1 00:06:04.681 --rc geninfo_unexecuted_blocks=1 
00:06:04.681 00:06:04.681 ' 00:06:04.681 04:55:15 event.cpu_locks -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:04.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.681 --rc genhtml_branch_coverage=1 00:06:04.681 --rc genhtml_function_coverage=1 00:06:04.681 --rc genhtml_legend=1 00:06:04.681 --rc geninfo_all_blocks=1 00:06:04.681 --rc geninfo_unexecuted_blocks=1 00:06:04.681 00:06:04.681 ' 00:06:04.681 04:55:15 event.cpu_locks -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:04.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.681 --rc genhtml_branch_coverage=1 00:06:04.681 --rc genhtml_function_coverage=1 00:06:04.681 --rc genhtml_legend=1 00:06:04.681 --rc geninfo_all_blocks=1 00:06:04.681 --rc geninfo_unexecuted_blocks=1 00:06:04.681 00:06:04.681 ' 00:06:04.681 04:55:15 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:04.681 04:55:15 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:04.681 04:55:15 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:04.681 04:55:15 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:04.681 04:55:15 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:04.681 04:55:15 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:04.681 04:55:15 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:04.681 ************************************ 00:06:04.681 START TEST default_locks 00:06:04.681 ************************************ 00:06:04.681 04:55:15 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:06:04.681 04:55:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=70807 00:06:04.681 04:55:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:04.681 
04:55:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 70807 00:06:04.681 04:55:15 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 70807 ']' 00:06:04.681 04:55:15 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:04.682 04:55:15 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:04.682 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:04.682 04:55:15 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:04.682 04:55:15 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:04.682 04:55:15 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:04.682 [2024-12-14 04:55:15.463459] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:06:04.682 [2024-12-14 04:55:15.463591] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70807 ] 00:06:04.942 [2024-12-14 04:55:15.617682] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.942 [2024-12-14 04:55:15.661721] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.511 04:55:16 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:05.511 04:55:16 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:06:05.511 04:55:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 70807 00:06:05.511 04:55:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 70807 00:06:05.511 04:55:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:05.771 04:55:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 70807 00:06:05.771 04:55:16 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 70807 ']' 00:06:05.771 04:55:16 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 70807 00:06:05.771 04:55:16 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:06:05.771 04:55:16 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:05.771 04:55:16 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70807 00:06:05.771 04:55:16 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:05.771 04:55:16 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:05.771 killing process with pid 70807 00:06:05.771 04:55:16 event.cpu_locks.default_locks -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 70807' 00:06:05.771 04:55:16 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 70807 00:06:05.771 04:55:16 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 70807 00:06:06.340 04:55:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 70807 00:06:06.340 04:55:16 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:06:06.340 04:55:16 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 70807 00:06:06.340 04:55:16 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:06.340 04:55:16 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:06.340 04:55:16 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:06.340 04:55:16 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:06.340 04:55:16 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 70807 00:06:06.340 04:55:16 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 70807 ']' 00:06:06.340 04:55:16 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:06.340 04:55:17 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:06.340 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:06.340 04:55:17 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:06.340 04:55:17 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:06.340 04:55:17 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:06.340 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (70807) - No such process 00:06:06.340 ERROR: process (pid: 70807) is no longer running 00:06:06.340 04:55:17 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:06.340 04:55:17 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:06:06.340 04:55:17 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:06:06.340 04:55:17 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:06.340 04:55:17 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:06.340 04:55:17 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:06.340 04:55:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:06.340 04:55:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:06.341 04:55:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:06.341 04:55:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:06.341 00:06:06.341 real 0m1.634s 00:06:06.341 user 0m1.583s 00:06:06.341 sys 0m0.563s 00:06:06.341 04:55:17 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:06.341 04:55:17 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:06.341 ************************************ 00:06:06.341 END TEST default_locks 00:06:06.341 ************************************ 00:06:06.341 04:55:17 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:06.341 04:55:17 event.cpu_locks -- common/autotest_common.sh@1101 -- # 
'[' 2 -le 1 ']' 00:06:06.341 04:55:17 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:06.341 04:55:17 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:06.341 ************************************ 00:06:06.341 START TEST default_locks_via_rpc 00:06:06.341 ************************************ 00:06:06.341 04:55:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:06:06.341 04:55:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=70854 00:06:06.341 04:55:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:06.341 04:55:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 70854 00:06:06.341 04:55:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 70854 ']' 00:06:06.341 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:06.341 04:55:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:06.341 04:55:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:06.341 04:55:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:06.341 04:55:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:06.341 04:55:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:06.341 [2024-12-14 04:55:17.168450] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:06:06.341 [2024-12-14 04:55:17.168584] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70854 ] 00:06:06.601 [2024-12-14 04:55:17.321078] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.601 [2024-12-14 04:55:17.365907] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.171 04:55:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:07.171 04:55:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:07.171 04:55:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:07.171 04:55:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:07.171 04:55:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:07.171 04:55:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:07.171 04:55:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:07.171 04:55:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:07.171 04:55:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:07.171 04:55:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:07.171 04:55:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:07.171 04:55:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:07.171 04:55:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:07.171 04:55:17 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:07.171 04:55:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 70854 00:06:07.171 04:55:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 70854 00:06:07.171 04:55:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:07.430 04:55:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 70854 00:06:07.430 04:55:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 70854 ']' 00:06:07.430 04:55:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 70854 00:06:07.430 04:55:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:06:07.430 04:55:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:07.430 04:55:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70854 00:06:07.430 killing process with pid 70854 00:06:07.430 04:55:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:07.430 04:55:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:07.430 04:55:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70854' 00:06:07.430 04:55:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 70854 00:06:07.430 04:55:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 70854 00:06:08.008 00:06:08.008 real 0m1.610s 00:06:08.008 user 0m1.565s 00:06:08.008 sys 0m0.559s 00:06:08.008 04:55:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:08.008 04:55:18 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:08.008 ************************************ 00:06:08.008 END TEST default_locks_via_rpc 00:06:08.008 ************************************ 00:06:08.008 04:55:18 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:08.008 04:55:18 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:08.008 04:55:18 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:08.008 04:55:18 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:08.008 ************************************ 00:06:08.008 START TEST non_locking_app_on_locked_coremask 00:06:08.008 ************************************ 00:06:08.008 04:55:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:06:08.008 04:55:18 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=70901 00:06:08.008 04:55:18 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:08.008 04:55:18 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 70901 /var/tmp/spdk.sock 00:06:08.008 04:55:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 70901 ']' 00:06:08.008 04:55:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:08.008 04:55:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:08.008 04:55:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:08.008 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:08.008 04:55:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:08.008 04:55:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:08.008 [2024-12-14 04:55:18.849229] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:08.008 [2024-12-14 04:55:18.849355] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70901 ] 00:06:08.290 [2024-12-14 04:55:19.008685] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.290 [2024-12-14 04:55:19.055096] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.874 04:55:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:08.874 04:55:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:08.874 04:55:19 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=70917 00:06:08.874 04:55:19 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:08.874 04:55:19 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 70917 /var/tmp/spdk2.sock 00:06:08.874 04:55:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 70917 ']' 00:06:08.874 04:55:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:08.874 04:55:19 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:08.874 04:55:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:08.874 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:08.874 04:55:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:08.874 04:55:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:08.874 [2024-12-14 04:55:19.741688] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:08.874 [2024-12-14 04:55:19.741901] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70917 ] 00:06:09.133 [2024-12-14 04:55:19.888923] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:09.133 [2024-12-14 04:55:19.888985] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.133 [2024-12-14 04:55:19.982717] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.703 04:55:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:09.703 04:55:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:09.703 04:55:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 70901 00:06:09.703 04:55:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 70901 00:06:09.703 04:55:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:10.641 04:55:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 70901 00:06:10.641 04:55:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 70901 ']' 00:06:10.641 04:55:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 70901 00:06:10.641 04:55:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:10.641 04:55:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:10.641 04:55:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70901 00:06:10.641 04:55:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:10.641 04:55:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:10.641 killing process with pid 70901 00:06:10.641 04:55:21 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 70901' 00:06:10.641 04:55:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 70901 00:06:10.641 04:55:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 70901 00:06:11.581 04:55:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 70917 00:06:11.581 04:55:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 70917 ']' 00:06:11.581 04:55:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 70917 00:06:11.581 04:55:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:11.581 04:55:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:11.581 04:55:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70917 00:06:11.581 killing process with pid 70917 00:06:11.581 04:55:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:11.581 04:55:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:11.581 04:55:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70917' 00:06:11.581 04:55:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 70917 00:06:11.581 04:55:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 70917 00:06:11.841 00:06:11.841 real 0m3.944s 00:06:11.841 user 0m4.096s 00:06:11.841 sys 0m1.264s 00:06:11.841 04:55:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:06:11.841 ************************************ 00:06:11.841 END TEST non_locking_app_on_locked_coremask 00:06:11.841 ************************************ 00:06:11.841 04:55:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:12.101 04:55:22 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:12.101 04:55:22 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:12.101 04:55:22 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:12.102 04:55:22 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:12.102 ************************************ 00:06:12.102 START TEST locking_app_on_unlocked_coremask 00:06:12.102 ************************************ 00:06:12.102 04:55:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:06:12.102 04:55:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=70988 00:06:12.102 04:55:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:12.102 04:55:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 70988 /var/tmp/spdk.sock 00:06:12.102 04:55:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 70988 ']' 00:06:12.102 04:55:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:12.102 04:55:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:12.102 04:55:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen 
on UNIX domain socket /var/tmp/spdk.sock...' 00:06:12.102 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:12.102 04:55:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:12.102 04:55:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:12.102 [2024-12-14 04:55:22.860611] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:12.102 [2024-12-14 04:55:22.860829] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70988 ] 00:06:12.362 [2024-12-14 04:55:23.018782] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:12.362 [2024-12-14 04:55:23.018950] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.362 [2024-12-14 04:55:23.063125] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.931 04:55:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:12.931 04:55:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:12.931 04:55:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=71004 00:06:12.932 04:55:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:12.932 04:55:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 71004 /var/tmp/spdk2.sock 00:06:12.932 04:55:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 71004 ']' 00:06:12.932 04:55:23 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:12.932 04:55:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:12.932 04:55:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:12.932 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:12.932 04:55:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:12.932 04:55:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:12.932 [2024-12-14 04:55:23.757270] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:12.932 [2024-12-14 04:55:23.757505] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71004 ] 00:06:13.191 [2024-12-14 04:55:23.903212] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.191 [2024-12-14 04:55:23.994505] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.759 04:55:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:13.759 04:55:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:13.759 04:55:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 71004 00:06:13.759 04:55:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 71004 00:06:13.759 04:55:24 event.cpu_locks.locking_app_on_unlocked_coremask -- 
event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:14.330 04:55:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 70988 00:06:14.330 04:55:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 70988 ']' 00:06:14.330 04:55:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 70988 00:06:14.330 04:55:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:14.330 04:55:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:14.330 04:55:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70988 00:06:14.330 killing process with pid 70988 00:06:14.330 04:55:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:14.330 04:55:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:14.330 04:55:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70988' 00:06:14.330 04:55:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 70988 00:06:14.330 04:55:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 70988 00:06:15.271 04:55:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 71004 00:06:15.271 04:55:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 71004 ']' 00:06:15.271 04:55:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 71004 00:06:15.271 04:55:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:15.271 
04:55:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:15.271 04:55:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71004 00:06:15.271 killing process with pid 71004 00:06:15.271 04:55:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:15.271 04:55:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:15.271 04:55:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71004' 00:06:15.271 04:55:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 71004 00:06:15.271 04:55:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 71004 00:06:15.531 00:06:15.531 real 0m3.562s 00:06:15.531 user 0m3.714s 00:06:15.531 sys 0m1.094s 00:06:15.531 04:55:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:15.531 04:55:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:15.531 ************************************ 00:06:15.531 END TEST locking_app_on_unlocked_coremask 00:06:15.531 ************************************ 00:06:15.531 04:55:26 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:15.531 04:55:26 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:15.531 04:55:26 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:15.531 04:55:26 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:15.531 ************************************ 00:06:15.531 START TEST locking_app_on_locked_coremask 00:06:15.531 
************************************ 00:06:15.531 04:55:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:06:15.531 04:55:26 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=71066 00:06:15.531 04:55:26 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:15.531 04:55:26 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 71066 /var/tmp/spdk.sock 00:06:15.531 04:55:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 71066 ']' 00:06:15.531 04:55:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:15.531 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:15.531 04:55:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:15.531 04:55:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:15.531 04:55:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:15.531 04:55:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:15.791 [2024-12-14 04:55:26.490650] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:06:15.791 [2024-12-14 04:55:26.490870] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71066 ] 00:06:15.791 [2024-12-14 04:55:26.650575] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.051 [2024-12-14 04:55:26.696749] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.621 04:55:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:16.621 04:55:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:16.621 04:55:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=71078 00:06:16.621 04:55:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:16.621 04:55:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 71078 /var/tmp/spdk2.sock 00:06:16.621 04:55:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:06:16.621 04:55:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 71078 /var/tmp/spdk2.sock 00:06:16.621 04:55:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:16.621 04:55:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:16.621 04:55:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:16.621 04:55:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t 
"$arg")" in 00:06:16.621 04:55:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 71078 /var/tmp/spdk2.sock 00:06:16.621 04:55:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 71078 ']' 00:06:16.621 04:55:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:16.621 04:55:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:16.621 04:55:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:16.621 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:16.621 04:55:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:16.621 04:55:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:16.621 [2024-12-14 04:55:27.383930] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:16.621 [2024-12-14 04:55:27.384143] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71078 ] 00:06:16.881 [2024-12-14 04:55:27.534143] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 71066 has claimed it. 00:06:16.881 [2024-12-14 04:55:27.534229] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:06:17.140 ERROR: process (pid: 71078) is no longer running 00:06:17.140 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (71078) - No such process 00:06:17.140 04:55:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:17.140 04:55:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:06:17.140 04:55:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:06:17.140 04:55:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:17.140 04:55:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:17.140 04:55:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:17.140 04:55:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 71066 00:06:17.140 04:55:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 71066 00:06:17.140 04:55:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:17.708 04:55:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 71066 00:06:17.708 04:55:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 71066 ']' 00:06:17.708 04:55:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 71066 00:06:17.708 04:55:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:17.708 04:55:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:17.708 04:55:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71066 00:06:17.708 
killing process with pid 71066 00:06:17.708 04:55:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:17.708 04:55:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:17.708 04:55:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71066' 00:06:17.708 04:55:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 71066 00:06:17.708 04:55:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 71066 00:06:18.278 00:06:18.278 real 0m2.515s 00:06:18.278 user 0m2.682s 00:06:18.278 sys 0m0.783s 00:06:18.278 04:55:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:18.278 04:55:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:18.278 ************************************ 00:06:18.278 END TEST locking_app_on_locked_coremask 00:06:18.278 ************************************ 00:06:18.278 04:55:28 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:18.278 04:55:28 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:18.278 04:55:28 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:18.278 04:55:28 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:18.278 ************************************ 00:06:18.278 START TEST locking_overlapped_coremask 00:06:18.278 ************************************ 00:06:18.278 04:55:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:06:18.278 04:55:28 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=71131 00:06:18.278 04:55:28 
event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:06:18.278 04:55:28 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 71131 /var/tmp/spdk.sock 00:06:18.278 04:55:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 71131 ']' 00:06:18.278 04:55:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:18.278 04:55:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:18.278 04:55:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:18.278 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:18.278 04:55:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:18.278 04:55:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:18.278 [2024-12-14 04:55:29.079088] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:06:18.278 [2024-12-14 04:55:29.079316] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71131 ] 00:06:18.538 [2024-12-14 04:55:29.230014] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:18.538 [2024-12-14 04:55:29.275941] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.538 [2024-12-14 04:55:29.275898] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:18.538 [2024-12-14 04:55:29.276055] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:19.112 04:55:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:19.112 04:55:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:19.112 04:55:29 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=71149 00:06:19.112 04:55:29 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:19.112 04:55:29 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 71149 /var/tmp/spdk2.sock 00:06:19.112 04:55:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:06:19.112 04:55:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 71149 /var/tmp/spdk2.sock 00:06:19.112 04:55:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:19.112 04:55:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:19.112 04:55:29 event.cpu_locks.locking_overlapped_coremask 
-- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:19.112 04:55:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:19.112 04:55:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 71149 /var/tmp/spdk2.sock 00:06:19.112 04:55:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 71149 ']' 00:06:19.112 04:55:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:19.112 04:55:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:19.112 04:55:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:19.112 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:19.112 04:55:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:19.112 04:55:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:19.112 [2024-12-14 04:55:29.974092] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:19.112 [2024-12-14 04:55:29.974304] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71149 ] 00:06:19.371 [2024-12-14 04:55:30.124906] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 71131 has claimed it. 00:06:19.371 [2024-12-14 04:55:30.124967] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:06:19.941 ERROR: process (pid: 71149) is no longer running 00:06:19.941 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (71149) - No such process 00:06:19.941 04:55:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:19.941 04:55:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:06:19.941 04:55:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:06:19.941 04:55:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:19.941 04:55:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:19.941 04:55:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:19.941 04:55:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:19.941 04:55:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:19.941 04:55:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:19.941 04:55:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:19.941 04:55:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 71131 00:06:19.941 04:55:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 71131 ']' 00:06:19.941 04:55:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 71131 00:06:19.941 04:55:30 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:06:19.941 04:55:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:19.941 04:55:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71131 00:06:19.941 killing process with pid 71131 00:06:19.941 04:55:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:19.941 04:55:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:19.941 04:55:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71131' 00:06:19.941 04:55:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 71131 00:06:19.941 04:55:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 71131 00:06:20.201 00:06:20.201 real 0m2.032s 00:06:20.201 user 0m5.341s 00:06:20.201 sys 0m0.532s 00:06:20.201 04:55:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:20.201 ************************************ 00:06:20.201 END TEST locking_overlapped_coremask 00:06:20.201 ************************************ 00:06:20.201 04:55:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:20.201 04:55:31 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:20.201 04:55:31 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:20.201 04:55:31 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:20.201 04:55:31 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:20.461 ************************************ 00:06:20.461 START TEST 
locking_overlapped_coremask_via_rpc 00:06:20.461 ************************************ 00:06:20.461 04:55:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:06:20.461 04:55:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=71191 00:06:20.461 04:55:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:20.461 04:55:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 71191 /var/tmp/spdk.sock 00:06:20.461 04:55:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 71191 ']' 00:06:20.461 04:55:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:20.461 04:55:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:20.461 04:55:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:20.461 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:20.461 04:55:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:20.461 04:55:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:20.461 [2024-12-14 04:55:31.179927] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:06:20.461 [2024-12-14 04:55:31.180048] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71191 ] 00:06:20.461 [2024-12-14 04:55:31.338505] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:20.461 [2024-12-14 04:55:31.338598] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:20.721 [2024-12-14 04:55:31.384919] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:20.721 [2024-12-14 04:55:31.385009] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.721 [2024-12-14 04:55:31.385141] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:21.289 04:55:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:21.289 04:55:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:21.289 04:55:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=71209 00:06:21.289 04:55:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 71209 /var/tmp/spdk2.sock 00:06:21.289 04:55:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:21.289 04:55:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 71209 ']' 00:06:21.289 04:55:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:21.289 04:55:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:21.289 04:55:31 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:21.289 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:21.289 04:55:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:21.289 04:55:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:21.289 [2024-12-14 04:55:32.078543] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:21.289 [2024-12-14 04:55:32.078745] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71209 ] 00:06:21.549 [2024-12-14 04:55:32.231943] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:21.549 [2024-12-14 04:55:32.231996] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:21.549 [2024-12-14 04:55:32.332778] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:06:21.549 [2024-12-14 04:55:32.332851] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:21.549 [2024-12-14 04:55:32.332980] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:06:22.143 04:55:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:22.143 04:55:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:22.143 04:55:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:22.143 04:55:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:22.143 04:55:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:22.143 04:55:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:22.143 04:55:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:22.143 04:55:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:06:22.143 04:55:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:22.143 04:55:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:22.143 04:55:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:22.143 04:55:32 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:22.143 04:55:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:22.143 04:55:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:22.143 04:55:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:22.143 04:55:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:22.143 [2024-12-14 04:55:32.921347] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 71191 has claimed it. 00:06:22.143 request: 00:06:22.143 { 00:06:22.143 "method": "framework_enable_cpumask_locks", 00:06:22.143 "req_id": 1 00:06:22.143 } 00:06:22.143 Got JSON-RPC error response 00:06:22.143 response: 00:06:22.143 { 00:06:22.143 "code": -32603, 00:06:22.143 "message": "Failed to claim CPU core: 2" 00:06:22.143 } 00:06:22.143 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:22.143 04:55:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:22.143 04:55:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:06:22.143 04:55:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:22.143 04:55:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:22.143 04:55:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:22.143 04:55:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 71191 /var/tmp/spdk.sock 00:06:22.143 04:55:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 71191 ']' 00:06:22.143 04:55:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:22.143 04:55:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:22.143 04:55:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:22.143 04:55:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:22.143 04:55:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:22.401 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:22.401 04:55:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:22.401 04:55:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:22.401 04:55:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 71209 /var/tmp/spdk2.sock 00:06:22.401 04:55:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 71209 ']' 00:06:22.401 04:55:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:22.401 04:55:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:22.401 04:55:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:06:22.401 04:55:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:22.401 04:55:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:22.660 ************************************ 00:06:22.660 END TEST locking_overlapped_coremask_via_rpc 00:06:22.660 ************************************ 00:06:22.661 04:55:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:22.661 04:55:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:22.661 04:55:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:22.661 04:55:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:22.661 04:55:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:22.661 04:55:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:22.661 00:06:22.661 real 0m2.242s 00:06:22.661 user 0m1.030s 00:06:22.661 sys 0m0.146s 00:06:22.661 04:55:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:22.661 04:55:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:22.661 04:55:33 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:22.661 04:55:33 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 71191 ]] 00:06:22.661 04:55:33 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 71191 00:06:22.661 04:55:33 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 71191 ']' 00:06:22.661 04:55:33 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 71191 00:06:22.661 04:55:33 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:06:22.661 04:55:33 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:22.661 04:55:33 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71191 00:06:22.661 killing process with pid 71191 00:06:22.661 04:55:33 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:22.661 04:55:33 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:22.661 04:55:33 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71191' 00:06:22.661 04:55:33 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 71191 00:06:22.661 04:55:33 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 71191 00:06:22.919 04:55:33 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 71209 ]] 00:06:22.919 04:55:33 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 71209 00:06:22.919 04:55:33 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 71209 ']' 00:06:22.919 04:55:33 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 71209 00:06:22.919 04:55:33 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:06:23.178 04:55:33 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:23.178 04:55:33 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71209 00:06:23.178 killing process with pid 71209 00:06:23.178 04:55:33 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:06:23.178 04:55:33 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:06:23.178 04:55:33 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing 
process with pid 71209' 00:06:23.178 04:55:33 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 71209 00:06:23.178 04:55:33 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 71209 00:06:23.438 04:55:34 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:23.438 Process with pid 71191 is not found 00:06:23.438 04:55:34 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:23.438 04:55:34 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 71191 ]] 00:06:23.438 04:55:34 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 71191 00:06:23.438 04:55:34 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 71191 ']' 00:06:23.438 04:55:34 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 71191 00:06:23.438 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (71191) - No such process 00:06:23.438 04:55:34 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 71191 is not found' 00:06:23.438 04:55:34 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 71209 ]] 00:06:23.438 04:55:34 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 71209 00:06:23.438 04:55:34 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 71209 ']' 00:06:23.438 04:55:34 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 71209 00:06:23.438 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (71209) - No such process 00:06:23.438 Process with pid 71209 is not found 00:06:23.438 04:55:34 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 71209 is not found' 00:06:23.438 04:55:34 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:23.438 00:06:23.438 real 0m19.082s 00:06:23.438 user 0m30.996s 00:06:23.438 sys 0m6.002s 00:06:23.438 04:55:34 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:23.438 ************************************ 00:06:23.438 END TEST cpu_locks 00:06:23.438 
************************************ 00:06:23.438 04:55:34 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:23.438 ************************************ 00:06:23.438 END TEST event 00:06:23.438 ************************************ 00:06:23.438 00:06:23.438 real 0m47.608s 00:06:23.438 user 1m29.722s 00:06:23.438 sys 0m9.826s 00:06:23.438 04:55:34 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:23.438 04:55:34 event -- common/autotest_common.sh@10 -- # set +x 00:06:23.697 04:55:34 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:23.697 04:55:34 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:23.697 04:55:34 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:23.697 04:55:34 -- common/autotest_common.sh@10 -- # set +x 00:06:23.697 ************************************ 00:06:23.697 START TEST thread 00:06:23.697 ************************************ 00:06:23.697 04:55:34 thread -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:23.697 * Looking for test storage... 
00:06:23.697 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:23.697 04:55:34 thread -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:23.697 04:55:34 thread -- common/autotest_common.sh@1681 -- # lcov --version 00:06:23.697 04:55:34 thread -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:23.697 04:55:34 thread -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:23.697 04:55:34 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:23.697 04:55:34 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:23.697 04:55:34 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:23.697 04:55:34 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:23.697 04:55:34 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:23.697 04:55:34 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:23.697 04:55:34 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:23.697 04:55:34 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:23.697 04:55:34 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:23.697 04:55:34 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:23.697 04:55:34 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:23.697 04:55:34 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:23.697 04:55:34 thread -- scripts/common.sh@345 -- # : 1 00:06:23.697 04:55:34 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:23.697 04:55:34 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:23.697 04:55:34 thread -- scripts/common.sh@365 -- # decimal 1 00:06:23.697 04:55:34 thread -- scripts/common.sh@353 -- # local d=1 00:06:23.697 04:55:34 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:23.697 04:55:34 thread -- scripts/common.sh@355 -- # echo 1 00:06:23.697 04:55:34 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:23.697 04:55:34 thread -- scripts/common.sh@366 -- # decimal 2 00:06:23.697 04:55:34 thread -- scripts/common.sh@353 -- # local d=2 00:06:23.697 04:55:34 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:23.697 04:55:34 thread -- scripts/common.sh@355 -- # echo 2 00:06:23.697 04:55:34 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:23.697 04:55:34 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:23.697 04:55:34 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:23.697 04:55:34 thread -- scripts/common.sh@368 -- # return 0 00:06:23.697 04:55:34 thread -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:23.697 04:55:34 thread -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:23.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.697 --rc genhtml_branch_coverage=1 00:06:23.697 --rc genhtml_function_coverage=1 00:06:23.698 --rc genhtml_legend=1 00:06:23.698 --rc geninfo_all_blocks=1 00:06:23.698 --rc geninfo_unexecuted_blocks=1 00:06:23.698 00:06:23.698 ' 00:06:23.698 04:55:34 thread -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:23.698 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.698 --rc genhtml_branch_coverage=1 00:06:23.698 --rc genhtml_function_coverage=1 00:06:23.698 --rc genhtml_legend=1 00:06:23.698 --rc geninfo_all_blocks=1 00:06:23.698 --rc geninfo_unexecuted_blocks=1 00:06:23.698 00:06:23.698 ' 00:06:23.698 04:55:34 thread -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:23.698 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.698 --rc genhtml_branch_coverage=1 00:06:23.698 --rc genhtml_function_coverage=1 00:06:23.698 --rc genhtml_legend=1 00:06:23.698 --rc geninfo_all_blocks=1 00:06:23.698 --rc geninfo_unexecuted_blocks=1 00:06:23.698 00:06:23.698 ' 00:06:23.698 04:55:34 thread -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:23.698 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.698 --rc genhtml_branch_coverage=1 00:06:23.698 --rc genhtml_function_coverage=1 00:06:23.698 --rc genhtml_legend=1 00:06:23.698 --rc geninfo_all_blocks=1 00:06:23.698 --rc geninfo_unexecuted_blocks=1 00:06:23.698 00:06:23.698 ' 00:06:23.698 04:55:34 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:23.698 04:55:34 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:06:23.698 04:55:34 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:23.698 04:55:34 thread -- common/autotest_common.sh@10 -- # set +x 00:06:23.957 ************************************ 00:06:23.957 START TEST thread_poller_perf 00:06:23.957 ************************************ 00:06:23.957 04:55:34 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:23.957 [2024-12-14 04:55:34.625015] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:06:23.957 [2024-12-14 04:55:34.625148] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71336 ] 00:06:23.957 [2024-12-14 04:55:34.784439] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.957 [2024-12-14 04:55:34.830685] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.957 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:25.338 [2024-12-14T04:55:36.221Z] ====================================== 00:06:25.338 [2024-12-14T04:55:36.221Z] busy:2296683268 (cyc) 00:06:25.338 [2024-12-14T04:55:36.221Z] total_run_count: 428000 00:06:25.338 [2024-12-14T04:55:36.221Z] tsc_hz: 2290000000 (cyc) 00:06:25.338 [2024-12-14T04:55:36.221Z] ====================================== 00:06:25.338 [2024-12-14T04:55:36.221Z] poller_cost: 5366 (cyc), 2343 (nsec) 00:06:25.338 ************************************ 00:06:25.338 END TEST thread_poller_perf 00:06:25.338 ************************************ 00:06:25.338 00:06:25.338 real 0m1.338s 00:06:25.338 user 0m1.145s 00:06:25.338 sys 0m0.088s 00:06:25.338 04:55:35 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:25.338 04:55:35 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:25.338 04:55:35 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:25.338 04:55:35 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:06:25.338 04:55:35 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:25.338 04:55:35 thread -- common/autotest_common.sh@10 -- # set +x 00:06:25.338 ************************************ 00:06:25.338 START TEST thread_poller_perf 00:06:25.338 
************************************ 00:06:25.338 04:55:35 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:25.338 [2024-12-14 04:55:36.028113] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:25.338 [2024-12-14 04:55:36.028304] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71378 ] 00:06:25.338 [2024-12-14 04:55:36.186812] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.598 [2024-12-14 04:55:36.232310] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.598 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:26.536 [2024-12-14T04:55:37.419Z] ====================================== 00:06:26.536 [2024-12-14T04:55:37.419Z] busy:2293217182 (cyc) 00:06:26.536 [2024-12-14T04:55:37.419Z] total_run_count: 5614000 00:06:26.536 [2024-12-14T04:55:37.419Z] tsc_hz: 2290000000 (cyc) 00:06:26.536 [2024-12-14T04:55:37.419Z] ====================================== 00:06:26.536 [2024-12-14T04:55:37.419Z] poller_cost: 408 (cyc), 178 (nsec) 00:06:26.536 00:06:26.536 real 0m1.342s 00:06:26.536 user 0m1.146s 00:06:26.536 sys 0m0.089s 00:06:26.536 04:55:37 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:26.536 04:55:37 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:26.536 ************************************ 00:06:26.536 END TEST thread_poller_perf 00:06:26.536 ************************************ 00:06:26.536 04:55:37 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:26.536 ************************************ 00:06:26.536 END TEST thread 00:06:26.537 ************************************ 00:06:26.537 
00:06:26.537 real 0m3.041s 00:06:26.537 user 0m2.449s 00:06:26.537 sys 0m0.384s 00:06:26.537 04:55:37 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:26.537 04:55:37 thread -- common/autotest_common.sh@10 -- # set +x 00:06:26.797 04:55:37 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:26.797 04:55:37 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:26.797 04:55:37 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:26.797 04:55:37 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:26.797 04:55:37 -- common/autotest_common.sh@10 -- # set +x 00:06:26.797 ************************************ 00:06:26.797 START TEST app_cmdline 00:06:26.797 ************************************ 00:06:26.797 04:55:37 app_cmdline -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:26.797 * Looking for test storage... 00:06:26.797 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:26.797 04:55:37 app_cmdline -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:26.797 04:55:37 app_cmdline -- common/autotest_common.sh@1681 -- # lcov --version 00:06:26.797 04:55:37 app_cmdline -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:26.797 04:55:37 app_cmdline -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:26.797 04:55:37 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:26.797 04:55:37 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:26.797 04:55:37 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:26.797 04:55:37 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:26.797 04:55:37 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:26.797 04:55:37 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:26.797 04:55:37 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:26.797 04:55:37 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 
00:06:26.797 04:55:37 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:26.797 04:55:37 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:26.797 04:55:37 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:26.797 04:55:37 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:26.797 04:55:37 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:26.797 04:55:37 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:26.797 04:55:37 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:26.797 04:55:37 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:26.797 04:55:37 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:26.797 04:55:37 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:26.797 04:55:37 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:26.797 04:55:37 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:26.797 04:55:37 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:26.797 04:55:37 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:26.797 04:55:37 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:26.797 04:55:37 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:26.797 04:55:37 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:26.797 04:55:37 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:26.797 04:55:37 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:26.797 04:55:37 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:26.797 04:55:37 app_cmdline -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:26.797 04:55:37 app_cmdline -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:26.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.797 --rc genhtml_branch_coverage=1 00:06:26.797 --rc genhtml_function_coverage=1 00:06:26.797 --rc 
genhtml_legend=1 00:06:26.797 --rc geninfo_all_blocks=1 00:06:26.797 --rc geninfo_unexecuted_blocks=1 00:06:26.797 00:06:26.797 ' 00:06:26.797 04:55:37 app_cmdline -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:26.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.797 --rc genhtml_branch_coverage=1 00:06:26.797 --rc genhtml_function_coverage=1 00:06:26.797 --rc genhtml_legend=1 00:06:26.797 --rc geninfo_all_blocks=1 00:06:26.797 --rc geninfo_unexecuted_blocks=1 00:06:26.797 00:06:26.797 ' 00:06:26.797 04:55:37 app_cmdline -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:26.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.797 --rc genhtml_branch_coverage=1 00:06:26.797 --rc genhtml_function_coverage=1 00:06:26.797 --rc genhtml_legend=1 00:06:26.797 --rc geninfo_all_blocks=1 00:06:26.797 --rc geninfo_unexecuted_blocks=1 00:06:26.797 00:06:26.797 ' 00:06:26.797 04:55:37 app_cmdline -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:26.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.797 --rc genhtml_branch_coverage=1 00:06:26.797 --rc genhtml_function_coverage=1 00:06:26.797 --rc genhtml_legend=1 00:06:26.797 --rc geninfo_all_blocks=1 00:06:26.797 --rc geninfo_unexecuted_blocks=1 00:06:26.797 00:06:26.797 ' 00:06:26.797 04:55:37 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:26.797 04:55:37 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=71456 00:06:26.797 04:55:37 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:26.797 04:55:37 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 71456 00:06:26.797 04:55:37 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 71456 ']' 00:06:26.797 04:55:37 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:26.797 04:55:37 app_cmdline -- common/autotest_common.sh@836 -- # 
local max_retries=100 00:06:26.797 04:55:37 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:26.797 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:26.797 04:55:37 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:26.797 04:55:37 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:27.057 [2024-12-14 04:55:37.765543] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:27.057 [2024-12-14 04:55:37.765755] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71456 ] 00:06:27.057 [2024-12-14 04:55:37.924437] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.318 [2024-12-14 04:55:37.971984] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.888 04:55:38 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:27.888 04:55:38 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:06:27.888 04:55:38 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:06:27.888 { 00:06:27.888 "version": "SPDK v24.09.1-pre git sha1 b18e1bd62", 00:06:27.888 "fields": { 00:06:27.888 "major": 24, 00:06:27.888 "minor": 9, 00:06:27.888 "patch": 1, 00:06:27.888 "suffix": "-pre", 00:06:27.888 "commit": "b18e1bd62" 00:06:27.888 } 00:06:27.888 } 00:06:27.888 04:55:38 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:27.888 04:55:38 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:27.888 04:55:38 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:27.888 04:55:38 app_cmdline -- app/cmdline.sh@26 -- # 
methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:27.888 04:55:38 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:27.888 04:55:38 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:27.888 04:55:38 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:27.888 04:55:38 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:27.888 04:55:38 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:27.888 04:55:38 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:27.888 04:55:38 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:27.888 04:55:38 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:27.888 04:55:38 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:27.888 04:55:38 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:06:27.888 04:55:38 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:27.888 04:55:38 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:28.148 04:55:38 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:28.148 04:55:38 app_cmdline -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:28.148 04:55:38 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:28.148 04:55:38 app_cmdline -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:28.148 04:55:38 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:28.148 04:55:38 app_cmdline -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:28.148 04:55:38 app_cmdline -- common/autotest_common.sh@644 
-- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:28.148 04:55:38 app_cmdline -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:28.148 request: 00:06:28.148 { 00:06:28.148 "method": "env_dpdk_get_mem_stats", 00:06:28.148 "req_id": 1 00:06:28.148 } 00:06:28.148 Got JSON-RPC error response 00:06:28.148 response: 00:06:28.148 { 00:06:28.148 "code": -32601, 00:06:28.148 "message": "Method not found" 00:06:28.148 } 00:06:28.148 04:55:38 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:06:28.148 04:55:38 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:28.148 04:55:38 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:28.148 04:55:38 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:28.148 04:55:38 app_cmdline -- app/cmdline.sh@1 -- # killprocess 71456 00:06:28.148 04:55:38 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 71456 ']' 00:06:28.148 04:55:38 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 71456 00:06:28.148 04:55:38 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:06:28.148 04:55:38 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:28.148 04:55:38 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71456 00:06:28.148 killing process with pid 71456 00:06:28.148 04:55:39 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:28.148 04:55:39 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:28.148 04:55:39 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71456' 00:06:28.148 04:55:39 app_cmdline -- common/autotest_common.sh@969 -- # kill 71456 00:06:28.148 04:55:39 app_cmdline -- common/autotest_common.sh@974 -- # wait 71456 00:06:28.718 ************************************ 00:06:28.719 END TEST app_cmdline 00:06:28.719 ************************************ 
00:06:28.719 00:06:28.719 real 0m1.940s 00:06:28.719 user 0m2.113s 00:06:28.719 sys 0m0.543s 00:06:28.719 04:55:39 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:28.719 04:55:39 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:28.719 04:55:39 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:28.719 04:55:39 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:28.719 04:55:39 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:28.719 04:55:39 -- common/autotest_common.sh@10 -- # set +x 00:06:28.719 ************************************ 00:06:28.719 START TEST version 00:06:28.719 ************************************ 00:06:28.719 04:55:39 version -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:28.719 * Looking for test storage... 00:06:28.719 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:28.719 04:55:39 version -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:28.719 04:55:39 version -- common/autotest_common.sh@1681 -- # lcov --version 00:06:28.719 04:55:39 version -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:28.979 04:55:39 version -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:28.979 04:55:39 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:28.979 04:55:39 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:28.979 04:55:39 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:28.979 04:55:39 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:28.979 04:55:39 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:28.979 04:55:39 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:28.979 04:55:39 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:28.979 04:55:39 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:28.979 04:55:39 version -- scripts/common.sh@340 -- # ver1_l=2 00:06:28.979 04:55:39 version -- 
scripts/common.sh@341 -- # ver2_l=1 00:06:28.979 04:55:39 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:28.979 04:55:39 version -- scripts/common.sh@344 -- # case "$op" in 00:06:28.979 04:55:39 version -- scripts/common.sh@345 -- # : 1 00:06:28.979 04:55:39 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:28.979 04:55:39 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:28.979 04:55:39 version -- scripts/common.sh@365 -- # decimal 1 00:06:28.979 04:55:39 version -- scripts/common.sh@353 -- # local d=1 00:06:28.979 04:55:39 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:28.979 04:55:39 version -- scripts/common.sh@355 -- # echo 1 00:06:28.979 04:55:39 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:28.979 04:55:39 version -- scripts/common.sh@366 -- # decimal 2 00:06:28.979 04:55:39 version -- scripts/common.sh@353 -- # local d=2 00:06:28.979 04:55:39 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:28.979 04:55:39 version -- scripts/common.sh@355 -- # echo 2 00:06:28.979 04:55:39 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:28.979 04:55:39 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:28.979 04:55:39 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:28.979 04:55:39 version -- scripts/common.sh@368 -- # return 0 00:06:28.979 04:55:39 version -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:28.979 04:55:39 version -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:28.979 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.979 --rc genhtml_branch_coverage=1 00:06:28.979 --rc genhtml_function_coverage=1 00:06:28.979 --rc genhtml_legend=1 00:06:28.979 --rc geninfo_all_blocks=1 00:06:28.979 --rc geninfo_unexecuted_blocks=1 00:06:28.979 00:06:28.979 ' 00:06:28.979 04:55:39 version -- common/autotest_common.sh@1694 -- # 
LCOV_OPTS=' 00:06:28.979 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.979 --rc genhtml_branch_coverage=1 00:06:28.979 --rc genhtml_function_coverage=1 00:06:28.979 --rc genhtml_legend=1 00:06:28.979 --rc geninfo_all_blocks=1 00:06:28.979 --rc geninfo_unexecuted_blocks=1 00:06:28.979 00:06:28.979 ' 00:06:28.979 04:55:39 version -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:28.979 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.979 --rc genhtml_branch_coverage=1 00:06:28.979 --rc genhtml_function_coverage=1 00:06:28.979 --rc genhtml_legend=1 00:06:28.979 --rc geninfo_all_blocks=1 00:06:28.979 --rc geninfo_unexecuted_blocks=1 00:06:28.979 00:06:28.979 ' 00:06:28.979 04:55:39 version -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:28.979 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.979 --rc genhtml_branch_coverage=1 00:06:28.979 --rc genhtml_function_coverage=1 00:06:28.979 --rc genhtml_legend=1 00:06:28.979 --rc geninfo_all_blocks=1 00:06:28.979 --rc geninfo_unexecuted_blocks=1 00:06:28.979 00:06:28.979 ' 00:06:28.979 04:55:39 version -- app/version.sh@17 -- # get_header_version major 00:06:28.979 04:55:39 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:28.979 04:55:39 version -- app/version.sh@14 -- # cut -f2 00:06:28.979 04:55:39 version -- app/version.sh@14 -- # tr -d '"' 00:06:28.979 04:55:39 version -- app/version.sh@17 -- # major=24 00:06:28.979 04:55:39 version -- app/version.sh@18 -- # get_header_version minor 00:06:28.979 04:55:39 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:28.979 04:55:39 version -- app/version.sh@14 -- # cut -f2 00:06:28.979 04:55:39 version -- app/version.sh@14 -- # tr -d '"' 00:06:28.979 04:55:39 version -- app/version.sh@18 -- # minor=9 00:06:28.979 04:55:39 
version -- app/version.sh@19 -- # get_header_version patch 00:06:28.979 04:55:39 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:28.979 04:55:39 version -- app/version.sh@14 -- # cut -f2 00:06:28.979 04:55:39 version -- app/version.sh@14 -- # tr -d '"' 00:06:28.979 04:55:39 version -- app/version.sh@19 -- # patch=1 00:06:28.979 04:55:39 version -- app/version.sh@20 -- # get_header_version suffix 00:06:28.979 04:55:39 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:28.979 04:55:39 version -- app/version.sh@14 -- # cut -f2 00:06:28.979 04:55:39 version -- app/version.sh@14 -- # tr -d '"' 00:06:28.979 04:55:39 version -- app/version.sh@20 -- # suffix=-pre 00:06:28.979 04:55:39 version -- app/version.sh@22 -- # version=24.9 00:06:28.979 04:55:39 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:28.979 04:55:39 version -- app/version.sh@25 -- # version=24.9.1 00:06:28.979 04:55:39 version -- app/version.sh@28 -- # version=24.9.1rc0 00:06:28.980 04:55:39 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:28.980 04:55:39 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:28.980 04:55:39 version -- app/version.sh@30 -- # py_version=24.9.1rc0 00:06:28.980 04:55:39 version -- app/version.sh@31 -- # [[ 24.9.1rc0 == \2\4\.\9\.\1\r\c\0 ]] 00:06:28.980 00:06:28.980 real 0m0.309s 00:06:28.980 user 0m0.190s 00:06:28.980 sys 0m0.176s 00:06:28.980 ************************************ 00:06:28.980 END TEST version 00:06:28.980 ************************************ 00:06:28.980 04:55:39 version -- common/autotest_common.sh@1126 -- # xtrace_disable 
00:06:28.980 04:55:39 version -- common/autotest_common.sh@10 -- # set +x 00:06:28.980 04:55:39 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:28.980 04:55:39 -- spdk/autotest.sh@188 -- # [[ 1 -eq 1 ]] 00:06:28.980 04:55:39 -- spdk/autotest.sh@189 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:06:28.980 04:55:39 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:28.980 04:55:39 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:28.980 04:55:39 -- common/autotest_common.sh@10 -- # set +x 00:06:28.980 ************************************ 00:06:28.980 START TEST bdev_raid 00:06:28.980 ************************************ 00:06:28.980 04:55:39 bdev_raid -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:06:29.240 * Looking for test storage... 00:06:29.240 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:06:29.240 04:55:39 bdev_raid -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:29.240 04:55:39 bdev_raid -- common/autotest_common.sh@1681 -- # lcov --version 00:06:29.240 04:55:39 bdev_raid -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:29.240 04:55:40 bdev_raid -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:29.240 04:55:40 bdev_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:29.240 04:55:40 bdev_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:29.240 04:55:40 bdev_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:29.240 04:55:40 bdev_raid -- scripts/common.sh@336 -- # IFS=.-: 00:06:29.240 04:55:40 bdev_raid -- scripts/common.sh@336 -- # read -ra ver1 00:06:29.240 04:55:40 bdev_raid -- scripts/common.sh@337 -- # IFS=.-: 00:06:29.240 04:55:40 bdev_raid -- scripts/common.sh@337 -- # read -ra ver2 00:06:29.240 04:55:40 bdev_raid -- scripts/common.sh@338 -- # local 'op=<' 00:06:29.240 04:55:40 bdev_raid -- scripts/common.sh@340 -- # ver1_l=2 00:06:29.240 04:55:40 bdev_raid -- 
scripts/common.sh@341 -- # ver2_l=1 00:06:29.240 04:55:40 bdev_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:29.240 04:55:40 bdev_raid -- scripts/common.sh@344 -- # case "$op" in 00:06:29.240 04:55:40 bdev_raid -- scripts/common.sh@345 -- # : 1 00:06:29.240 04:55:40 bdev_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:29.240 04:55:40 bdev_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:29.240 04:55:40 bdev_raid -- scripts/common.sh@365 -- # decimal 1 00:06:29.240 04:55:40 bdev_raid -- scripts/common.sh@353 -- # local d=1 00:06:29.240 04:55:40 bdev_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:29.240 04:55:40 bdev_raid -- scripts/common.sh@355 -- # echo 1 00:06:29.240 04:55:40 bdev_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:06:29.240 04:55:40 bdev_raid -- scripts/common.sh@366 -- # decimal 2 00:06:29.240 04:55:40 bdev_raid -- scripts/common.sh@353 -- # local d=2 00:06:29.240 04:55:40 bdev_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:29.240 04:55:40 bdev_raid -- scripts/common.sh@355 -- # echo 2 00:06:29.240 04:55:40 bdev_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:06:29.240 04:55:40 bdev_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:29.240 04:55:40 bdev_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:29.240 04:55:40 bdev_raid -- scripts/common.sh@368 -- # return 0 00:06:29.240 04:55:40 bdev_raid -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:29.240 04:55:40 bdev_raid -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:29.240 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:29.240 --rc genhtml_branch_coverage=1 00:06:29.240 --rc genhtml_function_coverage=1 00:06:29.240 --rc genhtml_legend=1 00:06:29.240 --rc geninfo_all_blocks=1 00:06:29.240 --rc geninfo_unexecuted_blocks=1 00:06:29.240 00:06:29.240 ' 00:06:29.240 04:55:40 bdev_raid -- 
common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:29.240 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:29.240 --rc genhtml_branch_coverage=1 00:06:29.240 --rc genhtml_function_coverage=1 00:06:29.240 --rc genhtml_legend=1 00:06:29.240 --rc geninfo_all_blocks=1 00:06:29.240 --rc geninfo_unexecuted_blocks=1 00:06:29.240 00:06:29.240 ' 00:06:29.240 04:55:40 bdev_raid -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:29.240 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:29.240 --rc genhtml_branch_coverage=1 00:06:29.240 --rc genhtml_function_coverage=1 00:06:29.240 --rc genhtml_legend=1 00:06:29.240 --rc geninfo_all_blocks=1 00:06:29.240 --rc geninfo_unexecuted_blocks=1 00:06:29.240 00:06:29.240 ' 00:06:29.240 04:55:40 bdev_raid -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:29.240 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:29.240 --rc genhtml_branch_coverage=1 00:06:29.240 --rc genhtml_function_coverage=1 00:06:29.240 --rc genhtml_legend=1 00:06:29.240 --rc geninfo_all_blocks=1 00:06:29.240 --rc geninfo_unexecuted_blocks=1 00:06:29.240 00:06:29.240 ' 00:06:29.240 04:55:40 bdev_raid -- bdev/bdev_raid.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:29.240 04:55:40 bdev_raid -- bdev/nbd_common.sh@6 -- # set -e 00:06:29.240 04:55:40 bdev_raid -- bdev/bdev_raid.sh@14 -- # rpc_py=rpc_cmd 00:06:29.240 04:55:40 bdev_raid -- bdev/bdev_raid.sh@946 -- # mkdir -p /raidtest 00:06:29.240 04:55:40 bdev_raid -- bdev/bdev_raid.sh@947 -- # trap 'cleanup; exit 1' EXIT 00:06:29.240 04:55:40 bdev_raid -- bdev/bdev_raid.sh@949 -- # base_blocklen=512 00:06:29.240 04:55:40 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid1_resize_data_offset_test raid_resize_data_offset_test 00:06:29.240 04:55:40 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:29.240 04:55:40 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:29.240 04:55:40 bdev_raid -- 
common/autotest_common.sh@10 -- # set +x 00:06:29.240 ************************************ 00:06:29.240 START TEST raid1_resize_data_offset_test 00:06:29.240 ************************************ 00:06:29.240 Process raid pid: 71622 00:06:29.240 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:29.240 04:55:40 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1125 -- # raid_resize_data_offset_test 00:06:29.240 04:55:40 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@917 -- # raid_pid=71622 00:06:29.240 04:55:40 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@918 -- # echo 'Process raid pid: 71622' 00:06:29.240 04:55:40 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@919 -- # waitforlisten 71622 00:06:29.240 04:55:40 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@916 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:29.240 04:55:40 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@831 -- # '[' -z 71622 ']' 00:06:29.240 04:55:40 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:29.240 04:55:40 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:29.240 04:55:40 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:29.240 04:55:40 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:29.240 04:55:40 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:29.500 [2024-12-14 04:55:40.153761] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:06:29.500 [2024-12-14 04:55:40.153941] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:29.500 [2024-12-14 04:55:40.315102] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.500 [2024-12-14 04:55:40.360245] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.759 [2024-12-14 04:55:40.402291] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:29.759 [2024-12-14 04:55:40.402420] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:30.330 04:55:40 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:30.330 04:55:40 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@864 -- # return 0 00:06:30.330 04:55:40 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@922 -- # rpc_cmd bdev_malloc_create -b malloc0 64 512 -o 16 00:06:30.330 04:55:40 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:30.330 04:55:40 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:30.330 malloc0 00:06:30.330 04:55:41 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:30.330 04:55:41 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@923 -- # rpc_cmd bdev_malloc_create -b malloc1 64 512 -o 16 00:06:30.330 04:55:41 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:30.330 04:55:41 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:30.330 malloc1 00:06:30.330 04:55:41 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:30.330 04:55:41 
bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@924 -- # rpc_cmd bdev_null_create null0 64 512 00:06:30.330 04:55:41 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:30.330 04:55:41 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:30.330 null0 00:06:30.330 04:55:41 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:30.330 04:55:41 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@926 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''malloc0 malloc1 null0'\''' -s 00:06:30.330 04:55:41 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:30.330 04:55:41 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:30.330 [2024-12-14 04:55:41.049917] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc0 is claimed 00:06:30.330 [2024-12-14 04:55:41.051836] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:06:30.330 [2024-12-14 04:55:41.051923] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev null0 is claimed 00:06:30.330 [2024-12-14 04:55:41.052102] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:06:30.330 [2024-12-14 04:55:41.052174] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 129024, blocklen 512 00:06:30.330 [2024-12-14 04:55:41.052522] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:06:30.330 [2024-12-14 04:55:41.052704] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:06:30.330 [2024-12-14 04:55:41.052749] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000006280 00:06:30.330 [2024-12-14 04:55:41.052927] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:06:30.330 04:55:41 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:30.330 04:55:41 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:30.330 04:55:41 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:06:30.330 04:55:41 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:30.330 04:55:41 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:30.330 04:55:41 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:30.330 04:55:41 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # (( 2048 == 2048 )) 00:06:30.330 04:55:41 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@931 -- # rpc_cmd bdev_null_delete null0 00:06:30.330 04:55:41 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:30.330 04:55:41 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:30.330 [2024-12-14 04:55:41.109808] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: null0 00:06:30.330 04:55:41 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:30.330 04:55:41 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@935 -- # rpc_cmd bdev_malloc_create -b malloc2 512 512 -o 30 00:06:30.330 04:55:41 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:30.330 04:55:41 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:30.591 malloc2 00:06:30.591 04:55:41 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:30.591 04:55:41 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@936 -- # rpc_cmd bdev_raid_add_base_bdev 
Raid malloc2 00:06:30.591 04:55:41 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:30.591 04:55:41 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:30.591 [2024-12-14 04:55:41.233417] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:06:30.591 [2024-12-14 04:55:41.237746] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:06:30.591 04:55:41 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:30.591 [2024-12-14 04:55:41.239622] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev Raid 00:06:30.591 04:55:41 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:30.591 04:55:41 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:06:30.591 04:55:41 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:30.591 04:55:41 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:30.591 04:55:41 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:30.591 04:55:41 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # (( 2070 == 2070 )) 00:06:30.591 04:55:41 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@941 -- # killprocess 71622 00:06:30.591 04:55:41 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@950 -- # '[' -z 71622 ']' 00:06:30.591 04:55:41 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@954 -- # kill -0 71622 00:06:30.591 04:55:41 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@955 -- # uname 00:06:30.591 04:55:41 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux 
']' 00:06:30.591 04:55:41 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71622 00:06:30.591 04:55:41 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:30.591 04:55:41 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:30.591 04:55:41 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71622' 00:06:30.591 killing process with pid 71622 00:06:30.591 04:55:41 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@969 -- # kill 71622 00:06:30.591 [2024-12-14 04:55:41.331604] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:30.591 04:55:41 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@974 -- # wait 71622 00:06:30.591 [2024-12-14 04:55:41.333356] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev Raid: Operation canceled 00:06:30.591 [2024-12-14 04:55:41.333467] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:30.591 [2024-12-14 04:55:41.333509] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: malloc2 00:06:30.591 [2024-12-14 04:55:41.338807] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:30.591 [2024-12-14 04:55:41.339143] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:30.591 [2024-12-14 04:55:41.339221] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Raid, state offline 00:06:30.851 [2024-12-14 04:55:41.547433] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:31.110 04:55:41 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@943 -- # return 0 00:06:31.111 00:06:31.111 real 0m1.709s 00:06:31.111 user 0m1.675s 00:06:31.111 sys 0m0.461s 00:06:31.111 04:55:41 
bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:31.111 04:55:41 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:31.111 ************************************ 00:06:31.111 END TEST raid1_resize_data_offset_test 00:06:31.111 ************************************ 00:06:31.111 04:55:41 bdev_raid -- bdev/bdev_raid.sh@953 -- # run_test raid0_resize_superblock_test raid_resize_superblock_test 0 00:06:31.111 04:55:41 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:31.111 04:55:41 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:31.111 04:55:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:31.111 ************************************ 00:06:31.111 START TEST raid0_resize_superblock_test 00:06:31.111 ************************************ 00:06:31.111 04:55:41 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1125 -- # raid_resize_superblock_test 0 00:06:31.111 04:55:41 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=0 00:06:31.111 04:55:41 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=71672 00:06:31.111 04:55:41 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:31.111 04:55:41 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 71672' 00:06:31.111 Process raid pid: 71672 00:06:31.111 04:55:41 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 71672 00:06:31.111 04:55:41 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 71672 ']' 00:06:31.111 04:55:41 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:31.111 04:55:41 bdev_raid.raid0_resize_superblock_test -- 
common/autotest_common.sh@836 -- # local max_retries=100 00:06:31.111 04:55:41 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:31.111 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:31.111 04:55:41 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:31.111 04:55:41 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:31.111 [2024-12-14 04:55:41.930366] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:31.111 [2024-12-14 04:55:41.930578] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:31.371 [2024-12-14 04:55:42.072933] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.371 [2024-12-14 04:55:42.119836] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.371 [2024-12-14 04:55:42.162598] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:31.371 [2024-12-14 04:55:42.162717] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:31.943 04:55:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:31.943 04:55:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:06:31.943 04:55:42 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 00:06:31.943 04:55:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:31.943 04:55:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set 
+x 00:06:32.203 malloc0 00:06:32.203 04:55:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:32.203 04:55:42 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:06:32.203 04:55:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:32.203 04:55:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:32.203 [2024-12-14 04:55:42.871203] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:06:32.203 [2024-12-14 04:55:42.871271] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:32.203 [2024-12-14 04:55:42.871295] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:06:32.203 [2024-12-14 04:55:42.871306] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:32.203 [2024-12-14 04:55:42.873380] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:32.203 [2024-12-14 04:55:42.873437] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:06:32.203 pt0 00:06:32.203 04:55:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:32.203 04:55:42 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:06:32.203 04:55:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:32.203 04:55:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:32.203 b546afb9-2d2b-4ec9-b096-3c59b7e724a8 00:06:32.203 04:55:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:32.203 04:55:42 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 
00:06:32.203 04:55:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:32.203 04:55:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:32.203 41dc3d6b-1321-4039-b1b8-dc5f20fc5a28 00:06:32.203 04:55:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:32.203 04:55:42 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:06:32.203 04:55:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:32.203 04:55:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:32.203 112c124e-4834-4012-85cd-f35fc129bbc0 00:06:32.203 04:55:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:32.203 04:55:42 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:06:32.203 04:55:42 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@870 -- # rpc_cmd bdev_raid_create -n Raid -r 0 -z 64 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:06:32.203 04:55:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:32.203 04:55:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:32.203 [2024-12-14 04:55:43.006412] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev 41dc3d6b-1321-4039-b1b8-dc5f20fc5a28 is claimed 00:06:32.203 [2024-12-14 04:55:43.006543] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev 112c124e-4834-4012-85cd-f35fc129bbc0 is claimed 00:06:32.203 [2024-12-14 04:55:43.006674] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:06:32.203 [2024-12-14 04:55:43.006699] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 245760, blocklen 512 00:06:32.203 [2024-12-14 04:55:43.006944] 
bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:06:32.203 [2024-12-14 04:55:43.007099] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:06:32.203 [2024-12-14 04:55:43.007126] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000006280 00:06:32.203 [2024-12-14 04:55:43.007299] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:32.203 04:55:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:32.203 04:55:43 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:06:32.203 04:55:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:32.203 04:55:43 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:06:32.203 04:55:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:32.203 04:55:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:32.203 04:55:43 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:06:32.203 04:55:43 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:06:32.203 04:55:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:32.203 04:55:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:32.203 04:55:43 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:06:32.203 04:55:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:32.463 04:55:43 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:06:32.463 04:55:43 
bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:32.463 04:55:43 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:32.463 04:55:43 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:32.463 04:55:43 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # jq '.[].num_blocks' 00:06:32.463 04:55:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:32.463 04:55:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:32.463 [2024-12-14 04:55:43.106527] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:32.463 04:55:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:32.463 04:55:43 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:32.463 04:55:43 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:32.463 04:55:43 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # (( 245760 == 245760 )) 00:06:32.463 04:55:43 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:06:32.463 04:55:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:32.463 04:55:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:32.463 [2024-12-14 04:55:43.150392] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:32.463 [2024-12-14 04:55:43.150461] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '41dc3d6b-1321-4039-b1b8-dc5f20fc5a28' was resized: old size 131072, new size 204800 00:06:32.463 04:55:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:06:32.463 04:55:43 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:06:32.463 04:55:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:32.463 04:55:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:32.463 [2024-12-14 04:55:43.162271] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:32.463 [2024-12-14 04:55:43.162293] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '112c124e-4834-4012-85cd-f35fc129bbc0' was resized: old size 131072, new size 204800 00:06:32.463 [2024-12-14 04:55:43.162318] bdev_raid.c:2340:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 245760 to 393216 00:06:32.463 04:55:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:32.463 04:55:43 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:06:32.463 04:55:43 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:06:32.463 04:55:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:32.463 04:55:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:32.463 04:55:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:32.463 04:55:43 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:06:32.463 04:55:43 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:06:32.463 04:55:43 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:06:32.463 04:55:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:32.463 04:55:43 
bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:32.463 04:55:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:32.463 04:55:43 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:06:32.463 04:55:43 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:32.463 04:55:43 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # jq '.[].num_blocks' 00:06:32.463 04:55:43 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:32.463 04:55:43 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:32.463 04:55:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:32.463 04:55:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:32.463 [2024-12-14 04:55:43.274154] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:32.463 04:55:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:32.463 04:55:43 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:32.463 04:55:43 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:32.463 04:55:43 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # (( 393216 == 393216 )) 00:06:32.463 04:55:43 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:06:32.463 04:55:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:32.463 04:55:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:32.463 [2024-12-14 04:55:43.301954] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being 
removed: closing lvstore lvs0 00:06:32.463 [2024-12-14 04:55:43.302022] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:06:32.463 [2024-12-14 04:55:43.302033] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:32.463 [2024-12-14 04:55:43.302047] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:06:32.463 [2024-12-14 04:55:43.302177] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:32.463 [2024-12-14 04:55:43.302226] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:32.463 [2024-12-14 04:55:43.302237] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Raid, state offline 00:06:32.463 04:55:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:32.464 04:55:43 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:06:32.464 04:55:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:32.464 04:55:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:32.464 [2024-12-14 04:55:43.313873] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:06:32.464 [2024-12-14 04:55:43.313947] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:32.464 [2024-12-14 04:55:43.313968] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:06:32.464 [2024-12-14 04:55:43.313991] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:32.464 [2024-12-14 04:55:43.316203] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:32.464 [2024-12-14 04:55:43.316277] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 
00:06:32.464 [2024-12-14 04:55:43.317774] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 41dc3d6b-1321-4039-b1b8-dc5f20fc5a28 00:06:32.464 [2024-12-14 04:55:43.317840] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev 41dc3d6b-1321-4039-b1b8-dc5f20fc5a28 is claimed 00:06:32.464 [2024-12-14 04:55:43.317927] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 112c124e-4834-4012-85cd-f35fc129bbc0 00:06:32.464 [2024-12-14 04:55:43.317948] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev 112c124e-4834-4012-85cd-f35fc129bbc0 is claimed 00:06:32.464 [2024-12-14 04:55:43.318031] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev 112c124e-4834-4012-85cd-f35fc129bbc0 (2) smaller than existing raid bdev Raid (3) 00:06:32.464 [2024-12-14 04:55:43.318051] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev 41dc3d6b-1321-4039-b1b8-dc5f20fc5a28: File exists 00:06:32.464 [2024-12-14 04:55:43.318088] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006600 00:06:32.464 [2024-12-14 04:55:43.318097] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 393216, blocklen 512 00:06:32.464 [2024-12-14 04:55:43.318333] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:06:32.464 [2024-12-14 04:55:43.318463] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006600 00:06:32.464 [2024-12-14 04:55:43.318473] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000006600 00:06:32.464 [2024-12-14 04:55:43.318608] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:32.464 pt0 00:06:32.464 04:55:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:32.464 04:55:43 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd 
bdev_wait_for_examine 00:06:32.464 04:55:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:32.464 04:55:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:32.464 04:55:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:32.464 04:55:43 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:32.464 04:55:43 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:32.464 04:55:43 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:32.464 04:55:43 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # jq '.[].num_blocks' 00:06:32.464 04:55:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:32.464 04:55:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:32.464 [2024-12-14 04:55:43.338301] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:32.724 04:55:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:32.724 04:55:43 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:32.724 04:55:43 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:32.724 04:55:43 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # (( 393216 == 393216 )) 00:06:32.724 04:55:43 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 71672 00:06:32.724 04:55:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 71672 ']' 00:06:32.724 04:55:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@954 -- # kill -0 71672 00:06:32.724 04:55:43 bdev_raid.raid0_resize_superblock_test -- 
common/autotest_common.sh@955 -- # uname 00:06:32.724 04:55:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:32.724 04:55:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71672 00:06:32.724 killing process with pid 71672 00:06:32.724 04:55:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:32.724 04:55:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:32.724 04:55:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71672' 00:06:32.724 04:55:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@969 -- # kill 71672 00:06:32.724 [2024-12-14 04:55:43.419968] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:32.724 [2024-12-14 04:55:43.420027] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:32.724 [2024-12-14 04:55:43.420065] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:32.724 [2024-12-14 04:55:43.420072] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Raid, state offline 00:06:32.724 04:55:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@974 -- # wait 71672 00:06:32.724 [2024-12-14 04:55:43.579372] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:32.983 04:55:43 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:06:32.983 00:06:32.983 real 0m1.954s 00:06:32.983 user 0m2.199s 00:06:32.983 sys 0m0.486s 00:06:32.983 04:55:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:32.983 04:55:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:32.983 
************************************ 00:06:32.983 END TEST raid0_resize_superblock_test 00:06:32.983 ************************************ 00:06:32.983 04:55:43 bdev_raid -- bdev/bdev_raid.sh@954 -- # run_test raid1_resize_superblock_test raid_resize_superblock_test 1 00:06:32.983 04:55:43 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:32.983 04:55:43 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:32.983 04:55:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:33.242 ************************************ 00:06:33.242 START TEST raid1_resize_superblock_test 00:06:33.242 ************************************ 00:06:33.242 04:55:43 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1125 -- # raid_resize_superblock_test 1 00:06:33.242 04:55:43 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=1 00:06:33.242 04:55:43 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=71745 00:06:33.243 04:55:43 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:33.243 04:55:43 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 71745' 00:06:33.243 Process raid pid: 71745 00:06:33.243 04:55:43 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 71745 00:06:33.243 04:55:43 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 71745 ']' 00:06:33.243 04:55:43 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:33.243 04:55:43 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:33.243 04:55:43 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:06:33.243 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:33.243 04:55:43 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:33.243 04:55:43 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:33.243 [2024-12-14 04:55:43.952997] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:33.243 [2024-12-14 04:55:43.953141] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:33.243 [2024-12-14 04:55:44.114632] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.502 [2024-12-14 04:55:44.162149] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.502 [2024-12-14 04:55:44.205040] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:33.502 [2024-12-14 04:55:44.205169] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:34.072 04:55:44 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:34.072 04:55:44 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:06:34.072 04:55:44 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 00:06:34.072 04:55:44 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.072 04:55:44 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:34.072 malloc0 00:06:34.072 04:55:44 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.072 04:55:44 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # 
rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:06:34.072 04:55:44 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.072 04:55:44 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:34.072 [2024-12-14 04:55:44.900089] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:06:34.072 [2024-12-14 04:55:44.900180] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:34.072 [2024-12-14 04:55:44.900210] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:06:34.072 [2024-12-14 04:55:44.900245] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:34.072 [2024-12-14 04:55:44.902331] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:34.072 [2024-12-14 04:55:44.902376] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:06:34.072 pt0 00:06:34.072 04:55:44 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.072 04:55:44 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:06:34.072 04:55:44 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.072 04:55:44 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:34.333 b6c9c206-1b97-44be-887d-d5125d91766c 00:06:34.333 04:55:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.333 04:55:45 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 00:06:34.333 04:55:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.333 04:55:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:34.333 
e1261b52-8fa6-40af-be23-a3c0e0b2138d 00:06:34.333 04:55:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.333 04:55:45 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:06:34.333 04:55:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.333 04:55:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:34.333 60e39e2f-7ec9-4418-b89c-ec9647365fc8 00:06:34.333 04:55:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.333 04:55:45 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:06:34.333 04:55:45 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@871 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:06:34.333 04:55:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.333 04:55:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:34.333 [2024-12-14 04:55:45.035554] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev e1261b52-8fa6-40af-be23-a3c0e0b2138d is claimed 00:06:34.333 [2024-12-14 04:55:45.035640] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev 60e39e2f-7ec9-4418-b89c-ec9647365fc8 is claimed 00:06:34.333 [2024-12-14 04:55:45.035747] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:06:34.333 [2024-12-14 04:55:45.035759] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 122880, blocklen 512 00:06:34.333 [2024-12-14 04:55:45.036011] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:06:34.333 [2024-12-14 04:55:45.036194] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:06:34.333 [2024-12-14 
04:55:45.036211] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000006280 00:06:34.333 [2024-12-14 04:55:45.036374] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:34.333 04:55:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.333 04:55:45 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:06:34.333 04:55:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.333 04:55:45 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:06:34.333 04:55:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:34.333 04:55:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.333 04:55:45 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:06:34.333 04:55:45 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:06:34.333 04:55:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.333 04:55:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:34.333 04:55:45 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:06:34.333 04:55:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.333 04:55:45 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:06:34.333 04:55:45 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:34.334 04:55:45 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # jq '.[].num_blocks' 00:06:34.334 04:55:45 bdev_raid.raid1_resize_superblock_test -- 
bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:34.334 04:55:45 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:34.334 04:55:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.334 04:55:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:34.334 [2024-12-14 04:55:45.127618] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:34.334 04:55:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.334 04:55:45 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:34.334 04:55:45 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:34.334 04:55:45 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # (( 122880 == 122880 )) 00:06:34.334 04:55:45 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:06:34.334 04:55:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.334 04:55:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:34.334 [2024-12-14 04:55:45.175480] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:34.334 [2024-12-14 04:55:45.175507] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'e1261b52-8fa6-40af-be23-a3c0e0b2138d' was resized: old size 131072, new size 204800 00:06:34.334 04:55:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.334 04:55:45 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:06:34.334 04:55:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.334 
04:55:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:34.334 [2024-12-14 04:55:45.187383] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:34.334 [2024-12-14 04:55:45.187406] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '60e39e2f-7ec9-4418-b89c-ec9647365fc8' was resized: old size 131072, new size 204800 00:06:34.334 [2024-12-14 04:55:45.187435] bdev_raid.c:2340:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 122880 to 196608 00:06:34.334 04:55:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.334 04:55:45 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:06:34.334 04:55:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.334 04:55:45 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:06:34.334 04:55:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:34.595 04:55:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.595 04:55:45 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:06:34.595 04:55:45 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:06:34.595 04:55:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.595 04:55:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:34.595 04:55:45 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:06:34.595 04:55:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.595 04:55:45 bdev_raid.raid1_resize_superblock_test -- 
bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:06:34.595 04:55:45 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:34.595 04:55:45 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # jq '.[].num_blocks' 00:06:34.595 04:55:45 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:34.595 04:55:45 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:34.595 04:55:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.595 04:55:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:34.595 [2024-12-14 04:55:45.303311] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:34.595 04:55:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.595 04:55:45 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:34.595 04:55:45 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:34.595 04:55:45 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # (( 196608 == 196608 )) 00:06:34.595 04:55:45 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:06:34.595 04:55:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.595 04:55:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:34.595 [2024-12-14 04:55:45.331085] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0 00:06:34.595 [2024-12-14 04:55:45.331195] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:06:34.595 [2024-12-14 04:55:45.331227] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:06:34.595 
[2024-12-14 04:55:45.331405] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:34.595 [2024-12-14 04:55:45.331570] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:34.595 [2024-12-14 04:55:45.331632] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:34.595 [2024-12-14 04:55:45.331652] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Raid, state offline 00:06:34.595 04:55:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.595 04:55:45 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:06:34.595 04:55:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.595 04:55:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:34.595 [2024-12-14 04:55:45.342996] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:06:34.595 [2024-12-14 04:55:45.343103] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:34.595 [2024-12-14 04:55:45.343126] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:06:34.595 [2024-12-14 04:55:45.343139] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:34.595 [2024-12-14 04:55:45.345255] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:34.595 [2024-12-14 04:55:45.345287] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:06:34.596 [2024-12-14 04:55:45.346665] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev e1261b52-8fa6-40af-be23-a3c0e0b2138d 00:06:34.596 [2024-12-14 04:55:45.346720] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev 
e1261b52-8fa6-40af-be23-a3c0e0b2138d is claimed 00:06:34.596 [2024-12-14 04:55:45.346800] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 60e39e2f-7ec9-4418-b89c-ec9647365fc8 00:06:34.596 [2024-12-14 04:55:45.346822] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev 60e39e2f-7ec9-4418-b89c-ec9647365fc8 is claimed 00:06:34.596 [2024-12-14 04:55:45.346899] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev 60e39e2f-7ec9-4418-b89c-ec9647365fc8 (2) smaller than existing raid bdev Raid (3) 00:06:34.596 [2024-12-14 04:55:45.346917] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev e1261b52-8fa6-40af-be23-a3c0e0b2138d: File exists 00:06:34.596 [2024-12-14 04:55:45.346957] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006600 00:06:34.596 [2024-12-14 04:55:45.346966] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:06:34.596 [2024-12-14 04:55:45.347194] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:06:34.596 [2024-12-14 04:55:45.347314] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006600 00:06:34.596 [2024-12-14 04:55:45.347327] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000006600 00:06:34.596 [2024-12-14 04:55:45.347488] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:34.596 pt0 00:06:34.596 04:55:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.596 04:55:45 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:06:34.596 04:55:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.596 04:55:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:34.596 04:55:45 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.596 04:55:45 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:34.596 04:55:45 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:34.596 04:55:45 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:34.596 04:55:45 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # jq '.[].num_blocks' 00:06:34.596 04:55:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.596 04:55:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:34.596 [2024-12-14 04:55:45.371604] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:34.596 04:55:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.596 04:55:45 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:34.596 04:55:45 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:34.596 04:55:45 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # (( 196608 == 196608 )) 00:06:34.596 04:55:45 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 71745 00:06:34.596 04:55:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 71745 ']' 00:06:34.596 04:55:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@954 -- # kill -0 71745 00:06:34.596 04:55:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@955 -- # uname 00:06:34.596 04:55:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:34.596 04:55:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@956 -- # ps 
--no-headers -o comm= 71745 00:06:34.596 killing process with pid 71745 00:06:34.596 04:55:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:34.596 04:55:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:34.596 04:55:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71745' 00:06:34.596 04:55:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@969 -- # kill 71745 00:06:34.596 [2024-12-14 04:55:45.452649] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:34.596 [2024-12-14 04:55:45.452710] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:34.596 [2024-12-14 04:55:45.452754] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:34.596 [2024-12-14 04:55:45.452762] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Raid, state offline 00:06:34.596 04:55:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@974 -- # wait 71745 00:06:34.856 [2024-12-14 04:55:45.611744] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:35.116 04:55:45 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:06:35.116 00:06:35.116 real 0m1.956s 00:06:35.116 user 0m2.208s 00:06:35.116 sys 0m0.477s 00:06:35.116 04:55:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:35.116 ************************************ 00:06:35.116 END TEST raid1_resize_superblock_test 00:06:35.116 ************************************ 00:06:35.116 04:55:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:35.116 04:55:45 bdev_raid -- bdev/bdev_raid.sh@956 -- # uname -s 00:06:35.116 04:55:45 bdev_raid -- bdev/bdev_raid.sh@956 -- # '[' 
Linux = Linux ']' 00:06:35.116 04:55:45 bdev_raid -- bdev/bdev_raid.sh@956 -- # modprobe -n nbd 00:06:35.116 04:55:45 bdev_raid -- bdev/bdev_raid.sh@957 -- # has_nbd=true 00:06:35.116 04:55:45 bdev_raid -- bdev/bdev_raid.sh@958 -- # modprobe nbd 00:06:35.116 04:55:45 bdev_raid -- bdev/bdev_raid.sh@959 -- # run_test raid_function_test_raid0 raid_function_test raid0 00:06:35.116 04:55:45 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:35.116 04:55:45 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:35.116 04:55:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:35.116 ************************************ 00:06:35.116 START TEST raid_function_test_raid0 00:06:35.116 ************************************ 00:06:35.116 04:55:45 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1125 -- # raid_function_test raid0 00:06:35.116 04:55:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@64 -- # local raid_level=raid0 00:06:35.116 04:55:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:06:35.116 04:55:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:06:35.116 Process raid pid: 71821 00:06:35.116 04:55:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@69 -- # raid_pid=71821 00:06:35.116 04:55:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:35.116 04:55:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 71821' 00:06:35.116 04:55:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@71 -- # waitforlisten 71821 00:06:35.116 04:55:45 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@831 -- # '[' -z 71821 ']' 00:06:35.116 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:35.116 04:55:45 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:35.116 04:55:45 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:35.116 04:55:45 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:35.116 04:55:45 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:35.116 04:55:45 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:35.376 [2024-12-14 04:55:46.001466] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:35.376 [2024-12-14 04:55:46.001592] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:35.376 [2024-12-14 04:55:46.161464] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.376 [2024-12-14 04:55:46.208967] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.376 [2024-12-14 04:55:46.251850] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:35.376 [2024-12-14 04:55:46.251888] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:35.945 04:55:46 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:35.945 04:55:46 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@864 -- # return 0 00:06:35.945 04:55:46 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:06:35.945 04:55:46 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:35.945 04:55:46 
bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:36.205 Base_1 00:06:36.205 04:55:46 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:36.205 04:55:46 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:06:36.205 04:55:46 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:36.205 04:55:46 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:36.205 Base_2 00:06:36.205 04:55:46 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:36.205 04:55:46 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''Base_1 Base_2'\''' -n raid 00:06:36.205 04:55:46 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:36.205 04:55:46 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:36.205 [2024-12-14 04:55:46.874993] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:06:36.205 [2024-12-14 04:55:46.878550] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:06:36.205 [2024-12-14 04:55:46.878663] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:06:36.205 [2024-12-14 04:55:46.878686] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:06:36.205 [2024-12-14 04:55:46.879230] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:06:36.205 [2024-12-14 04:55:46.879468] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:06:36.205 [2024-12-14 04:55:46.879498] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000006280 00:06:36.205 [2024-12-14 04:55:46.879809] 
bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:36.205 04:55:46 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:36.205 04:55:46 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:06:36.205 04:55:46 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:06:36.205 04:55:46 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:36.205 04:55:46 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:36.205 04:55:46 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:36.205 04:55:46 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:06:36.205 04:55:46 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:06:36.205 04:55:46 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:06:36.205 04:55:46 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:06:36.205 04:55:46 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:06:36.205 04:55:46 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:36.205 04:55:46 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:06:36.205 04:55:46 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:36.205 04:55:46 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@12 -- # local i 00:06:36.205 04:55:46 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:36.205 04:55:46 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:06:36.205 04:55:46 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:06:36.465 [2024-12-14 04:55:47.087392] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:06:36.465 /dev/nbd0 00:06:36.465 04:55:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:36.465 04:55:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:36.465 04:55:47 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:36.465 04:55:47 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@869 -- # local i 00:06:36.465 04:55:47 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:36.465 04:55:47 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:36.465 04:55:47 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:36.465 04:55:47 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@873 -- # break 00:06:36.465 04:55:47 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:36.465 04:55:47 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:36.465 04:55:47 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:36.465 1+0 records in 00:06:36.465 1+0 records out 00:06:36.465 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000496806 s, 8.2 MB/s 00:06:36.465 04:55:47 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:36.465 04:55:47 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@886 -- # size=4096 00:06:36.465 04:55:47 bdev_raid.raid_function_test_raid0 -- 
common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:36.465 04:55:47 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:36.465 04:55:47 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@889 -- # return 0 00:06:36.465 04:55:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:36.465 04:55:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:06:36.465 04:55:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:06:36.465 04:55:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:06:36.465 04:55:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:06:36.465 04:55:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:36.465 { 00:06:36.465 "nbd_device": "/dev/nbd0", 00:06:36.465 "bdev_name": "raid" 00:06:36.465 } 00:06:36.465 ]' 00:06:36.465 04:55:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:36.465 04:55:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:36.465 { 00:06:36.465 "nbd_device": "/dev/nbd0", 00:06:36.465 "bdev_name": "raid" 00:06:36.465 } 00:06:36.465 ]' 00:06:36.725 04:55:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:06:36.725 04:55:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:06:36.725 04:55:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:36.725 04:55:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=1 00:06:36.725 04:55:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 1 00:06:36.725 04:55:47 
bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # count=1 00:06:36.725 04:55:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:06:36.725 04:55:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:06:36.725 04:55:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:06:36.725 04:55:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:06:36.725 04:55:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@19 -- # local blksize 00:06:36.725 04:55:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:06:36.725 04:55:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:06:36.725 04:55:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:06:36.725 04:55:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # blksize=512 00:06:36.725 04:55:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:06:36.725 04:55:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:06:36.725 04:55:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:06:36.725 04:55:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:06:36.725 04:55:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:06:36.725 04:55:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:06:36.725 04:55:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:06:36.725 04:55:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:06:36.725 04:55:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom 
of=/raidtest/raidrandtest bs=512 count=4096 00:06:36.725 4096+0 records in 00:06:36.725 4096+0 records out 00:06:36.725 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0355144 s, 59.1 MB/s 00:06:36.725 04:55:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:06:37.019 4096+0 records in 00:06:37.019 4096+0 records out 00:06:37.019 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.19927 s, 10.5 MB/s 00:06:37.019 04:55:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:06:37.019 04:55:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:37.019 04:55:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:06:37.019 04:55:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:37.019 04:55:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:06:37.019 04:55:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:06:37.019 04:55:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:06:37.019 128+0 records in 00:06:37.019 128+0 records out 00:06:37.019 65536 bytes (66 kB, 64 KiB) copied, 0.000415145 s, 158 MB/s 00:06:37.019 04:55:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:06:37.019 04:55:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:06:37.019 04:55:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:37.019 04:55:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:06:37.019 04:55:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 
00:06:37.019 04:55:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:06:37.019 04:55:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:06:37.019 04:55:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:06:37.019 2035+0 records in 00:06:37.019 2035+0 records out 00:06:37.019 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0156693 s, 66.5 MB/s 00:06:37.019 04:55:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:06:37.019 04:55:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:06:37.019 04:55:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:37.019 04:55:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:06:37.019 04:55:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:37.019 04:55:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:06:37.019 04:55:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:06:37.019 04:55:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:06:37.019 456+0 records in 00:06:37.019 456+0 records out 00:06:37.020 233472 bytes (233 kB, 228 KiB) copied, 0.00366827 s, 63.6 MB/s 00:06:37.020 04:55:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:06:37.020 04:55:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:06:37.020 04:55:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:37.020 
04:55:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:06:37.020 04:55:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:37.020 04:55:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@52 -- # return 0 00:06:37.020 04:55:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:06:37.020 04:55:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:06:37.020 04:55:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:06:37.020 04:55:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:37.020 04:55:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@51 -- # local i 00:06:37.020 04:55:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:37.020 04:55:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:06:37.281 04:55:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:37.281 [2024-12-14 04:55:47.949051] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:37.281 04:55:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:37.281 04:55:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:37.281 04:55:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:37.281 04:55:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:37.281 04:55:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:37.281 04:55:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@41 -- # break 00:06:37.281 04:55:47 
bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@45 -- # return 0 00:06:37.281 04:55:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:06:37.281 04:55:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:06:37.281 04:55:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:06:37.281 04:55:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:37.281 04:55:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:37.281 04:55:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:37.541 04:55:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:37.541 04:55:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:37.541 04:55:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:37.541 04:55:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # true 00:06:37.541 04:55:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=0 00:06:37.541 04:55:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:37.541 04:55:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # count=0 00:06:37.541 04:55:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:06:37.541 04:55:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # killprocess 71821 00:06:37.541 04:55:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@950 -- # '[' -z 71821 ']' 00:06:37.541 04:55:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@954 -- # kill -0 71821 00:06:37.541 04:55:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@955 -- # 
uname 00:06:37.541 04:55:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:37.541 04:55:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71821 00:06:37.541 killing process with pid 71821 00:06:37.541 04:55:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:37.541 04:55:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:37.541 04:55:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71821' 00:06:37.541 04:55:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@969 -- # kill 71821 00:06:37.541 [2024-12-14 04:55:48.261432] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:37.541 [2024-12-14 04:55:48.261548] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:37.541 04:55:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@974 -- # wait 71821 00:06:37.541 [2024-12-14 04:55:48.261603] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:37.541 [2024-12-14 04:55:48.261617] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid, state offline 00:06:37.541 [2024-12-14 04:55:48.285147] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:37.801 04:55:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@99 -- # return 0 00:06:37.801 00:06:37.801 real 0m2.610s 00:06:37.801 user 0m3.164s 00:06:37.801 sys 0m0.893s 00:06:37.801 04:55:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:37.801 ************************************ 00:06:37.801 END TEST raid_function_test_raid0 00:06:37.801 ************************************ 00:06:37.801 04:55:48 
bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:37.801 04:55:48 bdev_raid -- bdev/bdev_raid.sh@960 -- # run_test raid_function_test_concat raid_function_test concat 00:06:37.801 04:55:48 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:37.801 04:55:48 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:37.801 04:55:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:37.801 ************************************ 00:06:37.801 START TEST raid_function_test_concat 00:06:37.801 ************************************ 00:06:37.801 04:55:48 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1125 -- # raid_function_test concat 00:06:37.801 04:55:48 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@64 -- # local raid_level=concat 00:06:37.801 04:55:48 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:06:37.801 04:55:48 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:06:37.801 Process raid pid: 71940 00:06:37.801 04:55:48 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@69 -- # raid_pid=71940 00:06:37.801 04:55:48 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:37.801 04:55:48 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 71940' 00:06:37.801 04:55:48 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@71 -- # waitforlisten 71940 00:06:37.801 04:55:48 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@831 -- # '[' -z 71940 ']' 00:06:37.801 04:55:48 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:37.801 04:55:48 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:37.801 Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock... 00:06:37.801 04:55:48 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:37.801 04:55:48 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:37.801 04:55:48 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:06:37.801 [2024-12-14 04:55:48.673388] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:37.801 [2024-12-14 04:55:48.673599] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:38.060 [2024-12-14 04:55:48.833651] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.060 [2024-12-14 04:55:48.878833] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.060 [2024-12-14 04:55:48.921550] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:38.060 [2024-12-14 04:55:48.921585] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:38.628 04:55:49 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:38.628 04:55:49 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@864 -- # return 0 00:06:38.628 04:55:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:06:38.628 04:55:49 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:38.628 04:55:49 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:06:38.628 Base_1 00:06:38.628 04:55:49 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:38.628 
04:55:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:06:38.628 04:55:49 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:38.628 04:55:49 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:06:38.887 Base_2 00:06:38.887 04:55:49 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:38.887 04:55:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''Base_1 Base_2'\''' -n raid 00:06:38.887 04:55:49 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:38.887 04:55:49 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:06:38.887 [2024-12-14 04:55:49.538647] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:06:38.887 [2024-12-14 04:55:49.541892] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:06:38.887 [2024-12-14 04:55:49.542048] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:06:38.887 [2024-12-14 04:55:49.542115] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:06:38.887 [2024-12-14 04:55:49.542588] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:06:38.887 [2024-12-14 04:55:49.542862] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:06:38.887 [2024-12-14 04:55:49.542940] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000006280 00:06:38.887 [2024-12-14 04:55:49.543321] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:38.887 04:55:49 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:38.887 04:55:49 
bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:06:38.887 04:55:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:06:38.887 04:55:49 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:38.887 04:55:49 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:06:38.887 04:55:49 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:38.887 04:55:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:06:38.887 04:55:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:06:38.888 04:55:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:06:38.888 04:55:49 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:06:38.888 04:55:49 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:06:38.888 04:55:49 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:38.888 04:55:49 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:06:38.888 04:55:49 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:38.888 04:55:49 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@12 -- # local i 00:06:38.888 04:55:49 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:38.888 04:55:49 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:06:38.888 04:55:49 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:06:39.147 [2024-12-14 04:55:49.774829] bdev_raid.c: 265:raid_bdev_create_cb: 
*DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:06:39.147 /dev/nbd0 00:06:39.147 04:55:49 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:39.147 04:55:49 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:39.147 04:55:49 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:39.147 04:55:49 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@869 -- # local i 00:06:39.147 04:55:49 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:39.147 04:55:49 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:39.147 04:55:49 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:39.147 04:55:49 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@873 -- # break 00:06:39.147 04:55:49 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:39.147 04:55:49 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:39.147 04:55:49 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:39.147 1+0 records in 00:06:39.147 1+0 records out 00:06:39.147 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000531717 s, 7.7 MB/s 00:06:39.147 04:55:49 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:39.147 04:55:49 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@886 -- # size=4096 00:06:39.147 04:55:49 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:39.147 04:55:49 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # '[' 
4096 '!=' 0 ']' 00:06:39.147 04:55:49 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@889 -- # return 0 00:06:39.147 04:55:49 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:39.147 04:55:49 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:06:39.147 04:55:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:06:39.147 04:55:49 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:06:39.147 04:55:49 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:06:39.407 04:55:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:39.407 { 00:06:39.407 "nbd_device": "/dev/nbd0", 00:06:39.407 "bdev_name": "raid" 00:06:39.407 } 00:06:39.407 ]' 00:06:39.407 04:55:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:39.407 { 00:06:39.407 "nbd_device": "/dev/nbd0", 00:06:39.407 "bdev_name": "raid" 00:06:39.407 } 00:06:39.407 ]' 00:06:39.407 04:55:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:39.407 04:55:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:06:39.407 04:55:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:06:39.407 04:55:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:39.407 04:55:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=1 00:06:39.407 04:55:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 1 00:06:39.407 04:55:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # count=1 00:06:39.407 04:55:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 
00:06:39.407 04:55:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:06:39.407 04:55:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:06:39.407 04:55:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:06:39.407 04:55:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@19 -- # local blksize 00:06:39.407 04:55:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:06:39.407 04:55:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:06:39.407 04:55:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:06:39.407 04:55:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # blksize=512 00:06:39.407 04:55:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:06:39.407 04:55:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:06:39.407 04:55:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:06:39.407 04:55:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:06:39.407 04:55:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:06:39.407 04:55:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:06:39.407 04:55:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:06:39.407 04:55:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:06:39.407 04:55:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:06:39.407 4096+0 records in 00:06:39.407 4096+0 records out 00:06:39.407 2097152 bytes (2.1 MB, 2.0 MiB) copied, 
0.0344597 s, 60.9 MB/s 00:06:39.407 04:55:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:06:39.665 4096+0 records in 00:06:39.665 4096+0 records out 00:06:39.665 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.180492 s, 11.6 MB/s 00:06:39.665 04:55:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:06:39.665 04:55:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:39.665 04:55:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:06:39.665 04:55:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:39.665 04:55:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:06:39.665 04:55:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:06:39.665 04:55:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:06:39.665 128+0 records in 00:06:39.665 128+0 records out 00:06:39.665 65536 bytes (66 kB, 64 KiB) copied, 0.00122069 s, 53.7 MB/s 00:06:39.665 04:55:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:06:39.665 04:55:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:06:39.665 04:55:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:39.665 04:55:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:06:39.665 04:55:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:39.665 04:55:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:06:39.665 04:55:50 
bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:06:39.665 04:55:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:06:39.665 2035+0 records in 00:06:39.665 2035+0 records out 00:06:39.665 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0141475 s, 73.6 MB/s 00:06:39.665 04:55:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:06:39.665 04:55:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:06:39.665 04:55:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:39.665 04:55:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:06:39.665 04:55:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:39.665 04:55:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:06:39.665 04:55:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:06:39.665 04:55:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:06:39.665 456+0 records in 00:06:39.665 456+0 records out 00:06:39.665 233472 bytes (233 kB, 228 KiB) copied, 0.00313186 s, 74.5 MB/s 00:06:39.665 04:55:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:06:39.665 04:55:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:06:39.665 04:55:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:39.665 04:55:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:06:39.665 04:55:50 
bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:39.665 04:55:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@52 -- # return 0 00:06:39.665 04:55:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:06:39.665 04:55:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:06:39.665 04:55:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:06:39.665 04:55:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:39.665 04:55:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@51 -- # local i 00:06:39.665 04:55:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:39.665 04:55:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:06:39.924 04:55:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:39.924 [2024-12-14 04:55:50.676280] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:39.924 04:55:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:39.924 04:55:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:39.924 04:55:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:39.924 04:55:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:39.924 04:55:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:39.924 04:55:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@41 -- # break 00:06:39.924 04:55:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@45 -- # return 0 00:06:39.924 
04:55:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:06:39.924 04:55:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:06:39.924 04:55:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:06:40.184 04:55:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:40.184 04:55:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:40.184 04:55:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:40.184 04:55:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:40.184 04:55:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:40.184 04:55:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:40.184 04:55:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # true 00:06:40.184 04:55:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=0 00:06:40.184 04:55:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:40.184 04:55:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # count=0 00:06:40.184 04:55:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:06:40.184 04:55:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # killprocess 71940 00:06:40.184 04:55:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@950 -- # '[' -z 71940 ']' 00:06:40.184 04:55:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@954 -- # kill -0 71940 00:06:40.184 04:55:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@955 -- # uname 00:06:40.184 04:55:50 bdev_raid.raid_function_test_concat -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:40.184 04:55:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71940 00:06:40.184 killing process with pid 71940 00:06:40.184 04:55:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:40.184 04:55:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:40.184 04:55:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71940' 00:06:40.184 04:55:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@969 -- # kill 71940 00:06:40.184 [2024-12-14 04:55:50.955731] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:40.184 [2024-12-14 04:55:50.955835] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:40.184 04:55:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@974 -- # wait 71940 00:06:40.184 [2024-12-14 04:55:50.955891] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:40.184 [2024-12-14 04:55:50.955903] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid, state offline 00:06:40.184 [2024-12-14 04:55:50.978911] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:40.444 04:55:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@99 -- # return 0 00:06:40.444 00:06:40.444 real 0m2.628s 00:06:40.444 user 0m3.146s 00:06:40.444 sys 0m0.961s 00:06:40.444 04:55:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:40.444 ************************************ 00:06:40.444 END TEST raid_function_test_concat 00:06:40.444 ************************************ 00:06:40.444 04:55:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 
00:06:40.444 04:55:51 bdev_raid -- bdev/bdev_raid.sh@963 -- # run_test raid0_resize_test raid_resize_test 0 00:06:40.444 04:55:51 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:40.444 04:55:51 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:40.444 04:55:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:40.444 ************************************ 00:06:40.444 START TEST raid0_resize_test 00:06:40.444 ************************************ 00:06:40.444 04:55:51 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1125 -- # raid_resize_test 0 00:06:40.444 04:55:51 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=0 00:06:40.444 04:55:51 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:06:40.444 04:55:51 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:06:40.444 04:55:51 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:06:40.444 04:55:51 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:06:40.444 04:55:51 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:06:40.444 04:55:51 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:06:40.444 04:55:51 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:06:40.444 04:55:51 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=72047 00:06:40.444 04:55:51 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:40.444 04:55:51 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 72047' 00:06:40.444 Process raid pid: 72047 00:06:40.444 04:55:51 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 72047 00:06:40.444 04:55:51 bdev_raid.raid0_resize_test -- common/autotest_common.sh@831 -- # '[' -z 
72047 ']' 00:06:40.444 04:55:51 bdev_raid.raid0_resize_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:40.444 04:55:51 bdev_raid.raid0_resize_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:40.444 04:55:51 bdev_raid.raid0_resize_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:40.444 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:40.444 04:55:51 bdev_raid.raid0_resize_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:40.444 04:55:51 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:40.704 [2024-12-14 04:55:51.378837] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:40.704 [2024-12-14 04:55:51.379051] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:40.704 [2024-12-14 04:55:51.536621] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.704 [2024-12-14 04:55:51.581979] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.963 [2024-12-14 04:55:51.624802] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:40.963 [2024-12-14 04:55:51.624834] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:41.532 04:55:52 bdev_raid.raid0_resize_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:41.532 04:55:52 bdev_raid.raid0_resize_test -- common/autotest_common.sh@864 -- # return 0 00:06:41.532 04:55:52 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:06:41.532 04:55:52 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:06:41.532 04:55:52 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:41.532 Base_1 00:06:41.532 04:55:52 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:41.532 04:55:52 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:06:41.532 04:55:52 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:41.532 04:55:52 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:41.532 Base_2 00:06:41.532 04:55:52 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:41.532 04:55:52 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 0 -eq 0 ']' 00:06:41.532 04:55:52 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@350 -- # rpc_cmd bdev_raid_create -z 64 -r 0 -b ''\''Base_1 Base_2'\''' -n Raid 00:06:41.532 04:55:52 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:41.532 04:55:52 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:41.532 [2024-12-14 04:55:52.202841] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:06:41.532 [2024-12-14 04:55:52.204638] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:06:41.532 [2024-12-14 04:55:52.204745] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:06:41.532 [2024-12-14 04:55:52.204768] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:06:41.532 [2024-12-14 04:55:52.205015] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:06:41.532 [2024-12-14 04:55:52.205115] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:06:41.532 [2024-12-14 04:55:52.205123] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000006280 
00:06:41.532 [2024-12-14 04:55:52.205251] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:41.532 04:55:52 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:41.532 04:55:52 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:06:41.532 04:55:52 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:41.532 04:55:52 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:41.532 [2024-12-14 04:55:52.210793] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:41.532 [2024-12-14 04:55:52.210822] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:06:41.532 true 00:06:41.532 04:55:52 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:41.532 04:55:52 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:41.532 04:55:52 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:06:41.532 04:55:52 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:41.532 04:55:52 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:41.532 [2024-12-14 04:55:52.222944] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:41.532 04:55:52 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:41.532 04:55:52 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=131072 00:06:41.532 04:55:52 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=64 00:06:41.532 04:55:52 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 0 -eq 0 ']' 00:06:41.532 04:55:52 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@362 -- # expected_size=64 00:06:41.532 04:55:52 bdev_raid.raid0_resize_test -- 
bdev/bdev_raid.sh@366 -- # '[' 64 '!=' 64 ']' 00:06:41.532 04:55:52 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:06:41.532 04:55:52 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:41.532 04:55:52 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:41.532 [2024-12-14 04:55:52.270687] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:41.532 [2024-12-14 04:55:52.270708] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:06:41.532 [2024-12-14 04:55:52.270736] bdev_raid.c:2340:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 131072 to 262144 00:06:41.532 true 00:06:41.532 04:55:52 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:41.532 04:55:52 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:41.532 04:55:52 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:41.532 04:55:52 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:41.532 04:55:52 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:06:41.532 [2024-12-14 04:55:52.282821] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:41.532 04:55:52 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:41.532 04:55:52 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=262144 00:06:41.532 04:55:52 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=128 00:06:41.532 04:55:52 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 0 -eq 0 ']' 00:06:41.532 04:55:52 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@378 -- # expected_size=128 00:06:41.532 04:55:52 bdev_raid.raid0_resize_test -- 
bdev/bdev_raid.sh@382 -- # '[' 128 '!=' 128 ']' 00:06:41.532 04:55:52 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 72047 00:06:41.532 04:55:52 bdev_raid.raid0_resize_test -- common/autotest_common.sh@950 -- # '[' -z 72047 ']' 00:06:41.532 04:55:52 bdev_raid.raid0_resize_test -- common/autotest_common.sh@954 -- # kill -0 72047 00:06:41.532 04:55:52 bdev_raid.raid0_resize_test -- common/autotest_common.sh@955 -- # uname 00:06:41.532 04:55:52 bdev_raid.raid0_resize_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:41.532 04:55:52 bdev_raid.raid0_resize_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72047 00:06:41.532 04:55:52 bdev_raid.raid0_resize_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:41.532 04:55:52 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:41.532 killing process with pid 72047 00:06:41.532 04:55:52 bdev_raid.raid0_resize_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72047' 00:06:41.532 04:55:52 bdev_raid.raid0_resize_test -- common/autotest_common.sh@969 -- # kill 72047 00:06:41.532 [2024-12-14 04:55:52.373878] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:41.532 [2024-12-14 04:55:52.373959] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:41.532 [2024-12-14 04:55:52.374001] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:41.532 [2024-12-14 04:55:52.374016] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Raid, state offline 00:06:41.532 04:55:52 bdev_raid.raid0_resize_test -- common/autotest_common.sh@974 -- # wait 72047 00:06:41.532 [2024-12-14 04:55:52.375549] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:41.792 ************************************ 00:06:41.792 END TEST raid0_resize_test 00:06:41.792 
************************************ 00:06:41.792 04:55:52 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:06:41.792 00:06:41.792 real 0m1.326s 00:06:41.792 user 0m1.454s 00:06:41.792 sys 0m0.318s 00:06:41.792 04:55:52 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:41.792 04:55:52 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:42.052 04:55:52 bdev_raid -- bdev/bdev_raid.sh@964 -- # run_test raid1_resize_test raid_resize_test 1 00:06:42.052 04:55:52 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:42.052 04:55:52 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:42.052 04:55:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:42.052 ************************************ 00:06:42.052 START TEST raid1_resize_test 00:06:42.052 ************************************ 00:06:42.052 04:55:52 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1125 -- # raid_resize_test 1 00:06:42.052 04:55:52 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=1 00:06:42.052 04:55:52 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:06:42.052 04:55:52 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:06:42.052 04:55:52 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:06:42.052 04:55:52 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:06:42.052 04:55:52 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:06:42.052 04:55:52 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:06:42.052 04:55:52 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:06:42.052 04:55:52 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=72097 00:06:42.052 04:55:52 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@341 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:42.052 04:55:52 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 72097' 00:06:42.052 Process raid pid: 72097 00:06:42.052 04:55:52 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 72097 00:06:42.052 04:55:52 bdev_raid.raid1_resize_test -- common/autotest_common.sh@831 -- # '[' -z 72097 ']' 00:06:42.052 04:55:52 bdev_raid.raid1_resize_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:42.052 04:55:52 bdev_raid.raid1_resize_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:42.052 04:55:52 bdev_raid.raid1_resize_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:42.052 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:42.052 04:55:52 bdev_raid.raid1_resize_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:42.052 04:55:52 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:42.052 [2024-12-14 04:55:52.774581] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:06:42.052 [2024-12-14 04:55:52.774794] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:42.312 [2024-12-14 04:55:52.936902] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.312 [2024-12-14 04:55:52.983366] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.312 [2024-12-14 04:55:53.025990] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:42.312 [2024-12-14 04:55:53.026029] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:42.880 04:55:53 bdev_raid.raid1_resize_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:42.880 04:55:53 bdev_raid.raid1_resize_test -- common/autotest_common.sh@864 -- # return 0 00:06:42.880 04:55:53 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:06:42.880 04:55:53 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:42.880 04:55:53 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:42.880 Base_1 00:06:42.880 04:55:53 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:42.880 04:55:53 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:06:42.880 04:55:53 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:42.880 04:55:53 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:42.880 Base_2 00:06:42.880 04:55:53 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:42.880 04:55:53 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 1 -eq 0 ']' 00:06:42.880 04:55:53 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@352 -- # rpc_cmd 
bdev_raid_create -r 1 -b ''\''Base_1 Base_2'\''' -n Raid 00:06:42.880 04:55:53 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:42.880 04:55:53 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:42.880 [2024-12-14 04:55:53.619702] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:06:42.880 [2024-12-14 04:55:53.621481] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:06:42.880 [2024-12-14 04:55:53.621533] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:06:42.880 [2024-12-14 04:55:53.621543] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:06:42.880 [2024-12-14 04:55:53.621773] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:06:42.880 [2024-12-14 04:55:53.621881] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:06:42.881 [2024-12-14 04:55:53.621889] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000006280 00:06:42.881 [2024-12-14 04:55:53.621990] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:42.881 04:55:53 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:42.881 04:55:53 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:06:42.881 04:55:53 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:42.881 04:55:53 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:42.881 [2024-12-14 04:55:53.631660] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:42.881 [2024-12-14 04:55:53.631690] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:06:42.881 true 00:06:42.881 
04:55:53 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:42.881 04:55:53 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:42.881 04:55:53 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:42.881 04:55:53 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:06:42.881 04:55:53 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:42.881 [2024-12-14 04:55:53.647798] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:42.881 04:55:53 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:42.881 04:55:53 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=65536 00:06:42.881 04:55:53 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=32 00:06:42.881 04:55:53 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 1 -eq 0 ']' 00:06:42.881 04:55:53 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@364 -- # expected_size=32 00:06:42.881 04:55:53 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 32 '!=' 32 ']' 00:06:42.881 04:55:53 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:06:42.881 04:55:53 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:42.881 04:55:53 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:42.881 [2024-12-14 04:55:53.687555] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:42.881 [2024-12-14 04:55:53.687576] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:06:42.881 [2024-12-14 04:55:53.687603] bdev_raid.c:2340:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 65536 to 131072 00:06:42.881 true 00:06:42.881 04:55:53 
bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:42.881 04:55:53 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:42.881 04:55:53 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:42.881 04:55:53 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:42.881 04:55:53 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:06:42.881 [2024-12-14 04:55:53.699716] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:42.881 04:55:53 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:42.881 04:55:53 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=131072 00:06:42.881 04:55:53 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=64 00:06:42.881 04:55:53 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 1 -eq 0 ']' 00:06:42.881 04:55:53 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@380 -- # expected_size=64 00:06:42.881 04:55:53 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 64 '!=' 64 ']' 00:06:42.881 04:55:53 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 72097 00:06:42.881 04:55:53 bdev_raid.raid1_resize_test -- common/autotest_common.sh@950 -- # '[' -z 72097 ']' 00:06:42.881 04:55:53 bdev_raid.raid1_resize_test -- common/autotest_common.sh@954 -- # kill -0 72097 00:06:42.881 04:55:53 bdev_raid.raid1_resize_test -- common/autotest_common.sh@955 -- # uname 00:06:42.881 04:55:53 bdev_raid.raid1_resize_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:42.881 04:55:53 bdev_raid.raid1_resize_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72097 00:06:43.141 killing process with pid 72097 00:06:43.141 04:55:53 bdev_raid.raid1_resize_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:43.141 04:55:53 
bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:43.141 04:55:53 bdev_raid.raid1_resize_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72097' 00:06:43.141 04:55:53 bdev_raid.raid1_resize_test -- common/autotest_common.sh@969 -- # kill 72097 00:06:43.141 [2024-12-14 04:55:53.768925] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:43.141 [2024-12-14 04:55:53.768993] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:43.141 04:55:53 bdev_raid.raid1_resize_test -- common/autotest_common.sh@974 -- # wait 72097 00:06:43.141 [2024-12-14 04:55:53.769420] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:43.141 [2024-12-14 04:55:53.769461] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Raid, state offline 00:06:43.141 [2024-12-14 04:55:53.770580] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:43.141 ************************************ 00:06:43.141 END TEST raid1_resize_test 00:06:43.141 ************************************ 00:06:43.141 04:55:54 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:06:43.141 00:06:43.141 real 0m1.324s 00:06:43.141 user 0m1.458s 00:06:43.141 sys 0m0.297s 00:06:43.141 04:55:54 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:43.141 04:55:54 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:43.401 04:55:54 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:06:43.401 04:55:54 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:06:43.401 04:55:54 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:06:43.401 04:55:54 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:06:43.401 04:55:54 bdev_raid -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:06:43.401 04:55:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:43.401 ************************************ 00:06:43.401 START TEST raid_state_function_test 00:06:43.401 ************************************ 00:06:43.401 04:55:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 2 false 00:06:43.401 04:55:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:06:43.401 04:55:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:06:43.401 04:55:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:06:43.401 04:55:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:06:43.401 04:55:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:06:43.401 04:55:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:06:43.401 04:55:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:06:43.401 04:55:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:06:43.401 04:55:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:06:43.401 04:55:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:06:43.401 04:55:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:06:43.401 04:55:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:06:43.401 04:55:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:06:43.401 04:55:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:06:43.401 04:55:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # 
local raid_bdev_name=Existed_Raid 00:06:43.401 04:55:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:06:43.401 04:55:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:06:43.401 04:55:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:06:43.401 04:55:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:06:43.401 04:55:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:06:43.401 04:55:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:06:43.401 04:55:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:06:43.401 04:55:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:06:43.401 Process raid pid: 72148 00:06:43.401 04:55:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=72148 00:06:43.401 04:55:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:43.401 04:55:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 72148' 00:06:43.401 04:55:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 72148 00:06:43.401 04:55:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 72148 ']' 00:06:43.401 04:55:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:43.401 04:55:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:43.401 04:55:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
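raid_state_function_test runs with `raid_level=raid0`, two base bdevs, and `strip_size=64` (KiB). A sketch of the resulting raid0 capacity for this test's 32 MiB, 512-byte-block malloc bdevs, assuming raid0 capacity is the smallest member rounded down to a whole strip, times the member count (an illustration of the striping arithmetic, not SPDK source code):

```python
# Sketch of raid0 sizing for the state-function test's parameters
# (strip_size_kb=64, two 32 MiB base bdevs with 512-byte blocks).
# Assumption: capacity = (smallest base, aligned down to a full strip)
# * number of members.

def raid0_num_blocks(base_block_counts, strip_size_kb, blksize):
    strip_blocks = strip_size_kb * 1024 // blksize      # 128 blocks per strip
    usable = min(base_block_counts) // strip_blocks * strip_blocks
    return usable * len(base_block_counts)

# Two 32 MiB bases: 65536 blocks each -> 131072-block raid0,
# matching 'blockcnt 131072, blocklen 512' in the log below.
print(raid0_num_blocks([65536, 65536], strip_size_kb=64, blksize=512))  # 131072
```

The `blockcnt 131072, blocklen 512` line logged when Existed_Raid comes online corresponds to this total.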
00:06:43.401 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:43.401 04:55:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:43.401 04:55:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:43.401 [2024-12-14 04:55:54.176745] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:43.401 [2024-12-14 04:55:54.176949] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:43.661 [2024-12-14 04:55:54.335866] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.661 [2024-12-14 04:55:54.379978] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.661 [2024-12-14 04:55:54.422388] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:43.661 [2024-12-14 04:55:54.422425] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:44.229 04:55:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:44.229 04:55:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:06:44.229 04:55:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:06:44.229 04:55:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:44.229 04:55:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:44.229 [2024-12-14 04:55:54.991917] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:06:44.229 [2024-12-14 04:55:54.991965] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev1 doesn't exist now 00:06:44.229 [2024-12-14 04:55:54.991977] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:44.229 [2024-12-14 04:55:54.991986] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:44.229 04:55:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:44.229 04:55:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:06:44.229 04:55:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:44.229 04:55:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:44.229 04:55:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:44.229 04:55:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:44.229 04:55:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:44.229 04:55:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:44.229 04:55:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:44.229 04:55:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:44.229 04:55:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:44.229 04:55:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:44.229 04:55:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:44.229 04:55:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:44.229 04:55:55 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:06:44.229 04:55:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:44.229 04:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:44.229 "name": "Existed_Raid", 00:06:44.229 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:44.229 "strip_size_kb": 64, 00:06:44.229 "state": "configuring", 00:06:44.229 "raid_level": "raid0", 00:06:44.229 "superblock": false, 00:06:44.229 "num_base_bdevs": 2, 00:06:44.229 "num_base_bdevs_discovered": 0, 00:06:44.229 "num_base_bdevs_operational": 2, 00:06:44.229 "base_bdevs_list": [ 00:06:44.229 { 00:06:44.229 "name": "BaseBdev1", 00:06:44.229 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:44.229 "is_configured": false, 00:06:44.229 "data_offset": 0, 00:06:44.229 "data_size": 0 00:06:44.229 }, 00:06:44.229 { 00:06:44.229 "name": "BaseBdev2", 00:06:44.229 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:44.229 "is_configured": false, 00:06:44.229 "data_offset": 0, 00:06:44.229 "data_size": 0 00:06:44.229 } 00:06:44.229 ] 00:06:44.229 }' 00:06:44.230 04:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:44.230 04:55:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:44.797 04:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:06:44.797 04:55:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:44.797 04:55:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:44.797 [2024-12-14 04:55:55.375226] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:06:44.797 [2024-12-14 04:55:55.375268] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:06:44.797 04:55:55 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:44.797 04:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:06:44.797 04:55:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:44.797 04:55:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:44.797 [2024-12-14 04:55:55.387270] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:06:44.797 [2024-12-14 04:55:55.387343] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:06:44.797 [2024-12-14 04:55:55.387370] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:44.797 [2024-12-14 04:55:55.387392] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:44.797 04:55:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:44.797 04:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:06:44.797 04:55:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:44.797 04:55:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:44.797 [2024-12-14 04:55:55.408000] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:06:44.797 BaseBdev1 00:06:44.797 04:55:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:44.797 04:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:06:44.797 04:55:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:06:44.797 04:55:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local 
bdev_timeout= 00:06:44.797 04:55:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:06:44.797 04:55:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:06:44.797 04:55:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:06:44.797 04:55:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:06:44.797 04:55:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:44.797 04:55:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:44.797 04:55:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:44.797 04:55:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:06:44.797 04:55:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:44.797 04:55:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:44.797 [ 00:06:44.797 { 00:06:44.797 "name": "BaseBdev1", 00:06:44.797 "aliases": [ 00:06:44.797 "3701a594-3501-4ad3-99be-b45113e51869" 00:06:44.797 ], 00:06:44.797 "product_name": "Malloc disk", 00:06:44.797 "block_size": 512, 00:06:44.797 "num_blocks": 65536, 00:06:44.797 "uuid": "3701a594-3501-4ad3-99be-b45113e51869", 00:06:44.797 "assigned_rate_limits": { 00:06:44.797 "rw_ios_per_sec": 0, 00:06:44.797 "rw_mbytes_per_sec": 0, 00:06:44.797 "r_mbytes_per_sec": 0, 00:06:44.797 "w_mbytes_per_sec": 0 00:06:44.797 }, 00:06:44.797 "claimed": true, 00:06:44.797 "claim_type": "exclusive_write", 00:06:44.797 "zoned": false, 00:06:44.797 "supported_io_types": { 00:06:44.797 "read": true, 00:06:44.797 "write": true, 00:06:44.797 "unmap": true, 00:06:44.797 "flush": true, 00:06:44.797 "reset": true, 00:06:44.797 "nvme_admin": false, 00:06:44.797 "nvme_io": 
false, 00:06:44.797 "nvme_io_md": false, 00:06:44.797 "write_zeroes": true, 00:06:44.797 "zcopy": true, 00:06:44.797 "get_zone_info": false, 00:06:44.797 "zone_management": false, 00:06:44.797 "zone_append": false, 00:06:44.797 "compare": false, 00:06:44.797 "compare_and_write": false, 00:06:44.797 "abort": true, 00:06:44.797 "seek_hole": false, 00:06:44.797 "seek_data": false, 00:06:44.797 "copy": true, 00:06:44.797 "nvme_iov_md": false 00:06:44.797 }, 00:06:44.797 "memory_domains": [ 00:06:44.797 { 00:06:44.797 "dma_device_id": "system", 00:06:44.797 "dma_device_type": 1 00:06:44.797 }, 00:06:44.797 { 00:06:44.797 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:44.797 "dma_device_type": 2 00:06:44.797 } 00:06:44.797 ], 00:06:44.797 "driver_specific": {} 00:06:44.797 } 00:06:44.797 ] 00:06:44.797 04:55:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:44.797 04:55:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:06:44.797 04:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:06:44.797 04:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:44.797 04:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:44.797 04:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:44.797 04:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:44.797 04:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:44.797 04:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:44.797 04:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:44.797 04:55:55 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:44.797 04:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:44.797 04:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:44.797 04:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:44.797 04:55:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:44.797 04:55:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:44.797 04:55:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:44.797 04:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:44.797 "name": "Existed_Raid", 00:06:44.797 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:44.797 "strip_size_kb": 64, 00:06:44.797 "state": "configuring", 00:06:44.797 "raid_level": "raid0", 00:06:44.797 "superblock": false, 00:06:44.797 "num_base_bdevs": 2, 00:06:44.797 "num_base_bdevs_discovered": 1, 00:06:44.797 "num_base_bdevs_operational": 2, 00:06:44.797 "base_bdevs_list": [ 00:06:44.797 { 00:06:44.797 "name": "BaseBdev1", 00:06:44.797 "uuid": "3701a594-3501-4ad3-99be-b45113e51869", 00:06:44.797 "is_configured": true, 00:06:44.797 "data_offset": 0, 00:06:44.797 "data_size": 65536 00:06:44.797 }, 00:06:44.797 { 00:06:44.797 "name": "BaseBdev2", 00:06:44.797 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:44.797 "is_configured": false, 00:06:44.797 "data_offset": 0, 00:06:44.797 "data_size": 0 00:06:44.797 } 00:06:44.797 ] 00:06:44.797 }' 00:06:44.797 04:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:44.797 04:55:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:45.057 04:55:55 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:06:45.057 04:55:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:45.057 04:55:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:45.057 [2024-12-14 04:55:55.863237] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:06:45.057 [2024-12-14 04:55:55.863320] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:06:45.057 04:55:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:45.057 04:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:06:45.057 04:55:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:45.057 04:55:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:45.057 [2024-12-14 04:55:55.875265] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:06:45.057 [2024-12-14 04:55:55.877041] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:45.057 [2024-12-14 04:55:55.877079] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:45.057 04:55:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:45.057 04:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:06:45.057 04:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:06:45.057 04:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:06:45.057 04:55:55 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:45.057 04:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:45.057 04:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:45.057 04:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:45.057 04:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:45.057 04:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:45.057 04:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:45.057 04:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:45.057 04:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:45.057 04:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:45.057 04:55:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:45.057 04:55:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:45.057 04:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:45.057 04:55:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:45.057 04:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:45.057 "name": "Existed_Raid", 00:06:45.057 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:45.057 "strip_size_kb": 64, 00:06:45.057 "state": "configuring", 00:06:45.057 "raid_level": "raid0", 00:06:45.057 "superblock": false, 00:06:45.057 "num_base_bdevs": 2, 00:06:45.057 "num_base_bdevs_discovered": 1, 00:06:45.057 "num_base_bdevs_operational": 2, 
00:06:45.057 "base_bdevs_list": [ 00:06:45.057 { 00:06:45.057 "name": "BaseBdev1", 00:06:45.057 "uuid": "3701a594-3501-4ad3-99be-b45113e51869", 00:06:45.057 "is_configured": true, 00:06:45.057 "data_offset": 0, 00:06:45.057 "data_size": 65536 00:06:45.057 }, 00:06:45.057 { 00:06:45.057 "name": "BaseBdev2", 00:06:45.057 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:45.057 "is_configured": false, 00:06:45.057 "data_offset": 0, 00:06:45.057 "data_size": 0 00:06:45.057 } 00:06:45.057 ] 00:06:45.057 }' 00:06:45.057 04:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:45.057 04:55:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:45.625 04:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:06:45.625 04:55:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:45.625 04:55:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:45.625 [2024-12-14 04:55:56.338897] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:06:45.626 [2024-12-14 04:55:56.339149] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:06:45.626 [2024-12-14 04:55:56.339261] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:06:45.626 [2024-12-14 04:55:56.340115] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:06:45.626 [2024-12-14 04:55:56.340655] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:06:45.626 [2024-12-14 04:55:56.340794] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:06:45.626 [2024-12-14 04:55:56.341517] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:45.626 BaseBdev2 00:06:45.626 
04:55:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:45.626 04:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:06:45.626 04:55:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:06:45.626 04:55:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:06:45.626 04:55:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:06:45.626 04:55:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:06:45.626 04:55:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:06:45.626 04:55:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:06:45.626 04:55:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:45.626 04:55:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:45.626 04:55:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:45.626 04:55:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:06:45.626 04:55:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:45.626 04:55:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:45.626 [ 00:06:45.626 { 00:06:45.626 "name": "BaseBdev2", 00:06:45.626 "aliases": [ 00:06:45.626 "bc32b157-75fc-40a1-bbf1-356b7b6ddbb0" 00:06:45.626 ], 00:06:45.626 "product_name": "Malloc disk", 00:06:45.626 "block_size": 512, 00:06:45.626 "num_blocks": 65536, 00:06:45.626 "uuid": "bc32b157-75fc-40a1-bbf1-356b7b6ddbb0", 00:06:45.626 "assigned_rate_limits": { 00:06:45.626 "rw_ios_per_sec": 0, 00:06:45.626 "rw_mbytes_per_sec": 0, 
00:06:45.626 "r_mbytes_per_sec": 0, 00:06:45.626 "w_mbytes_per_sec": 0 00:06:45.626 }, 00:06:45.626 "claimed": true, 00:06:45.626 "claim_type": "exclusive_write", 00:06:45.626 "zoned": false, 00:06:45.626 "supported_io_types": { 00:06:45.626 "read": true, 00:06:45.626 "write": true, 00:06:45.626 "unmap": true, 00:06:45.626 "flush": true, 00:06:45.626 "reset": true, 00:06:45.626 "nvme_admin": false, 00:06:45.626 "nvme_io": false, 00:06:45.626 "nvme_io_md": false, 00:06:45.626 "write_zeroes": true, 00:06:45.626 "zcopy": true, 00:06:45.626 "get_zone_info": false, 00:06:45.626 "zone_management": false, 00:06:45.626 "zone_append": false, 00:06:45.626 "compare": false, 00:06:45.626 "compare_and_write": false, 00:06:45.626 "abort": true, 00:06:45.626 "seek_hole": false, 00:06:45.626 "seek_data": false, 00:06:45.626 "copy": true, 00:06:45.626 "nvme_iov_md": false 00:06:45.626 }, 00:06:45.626 "memory_domains": [ 00:06:45.626 { 00:06:45.626 "dma_device_id": "system", 00:06:45.626 "dma_device_type": 1 00:06:45.626 }, 00:06:45.626 { 00:06:45.626 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:45.626 "dma_device_type": 2 00:06:45.626 } 00:06:45.626 ], 00:06:45.626 "driver_specific": {} 00:06:45.626 } 00:06:45.626 ] 00:06:45.626 04:55:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:45.626 04:55:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:06:45.626 04:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:06:45.626 04:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:06:45.626 04:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:06:45.626 04:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:45.626 04:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:06:45.626 04:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:45.626 04:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:45.626 04:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:45.626 04:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:45.626 04:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:45.626 04:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:45.626 04:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:45.626 04:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:45.626 04:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:45.626 04:55:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:45.626 04:55:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:45.626 04:55:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:45.626 04:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:45.626 "name": "Existed_Raid", 00:06:45.626 "uuid": "d876f233-bd15-4a95-a18d-3d8ef7fc0993", 00:06:45.626 "strip_size_kb": 64, 00:06:45.626 "state": "online", 00:06:45.626 "raid_level": "raid0", 00:06:45.626 "superblock": false, 00:06:45.626 "num_base_bdevs": 2, 00:06:45.626 "num_base_bdevs_discovered": 2, 00:06:45.626 "num_base_bdevs_operational": 2, 00:06:45.626 "base_bdevs_list": [ 00:06:45.626 { 00:06:45.626 "name": "BaseBdev1", 00:06:45.626 "uuid": "3701a594-3501-4ad3-99be-b45113e51869", 00:06:45.626 
"is_configured": true, 00:06:45.626 "data_offset": 0, 00:06:45.626 "data_size": 65536 00:06:45.626 }, 00:06:45.626 { 00:06:45.626 "name": "BaseBdev2", 00:06:45.626 "uuid": "bc32b157-75fc-40a1-bbf1-356b7b6ddbb0", 00:06:45.626 "is_configured": true, 00:06:45.626 "data_offset": 0, 00:06:45.626 "data_size": 65536 00:06:45.626 } 00:06:45.626 ] 00:06:45.626 }' 00:06:45.626 04:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:45.626 04:55:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:46.196 04:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:06:46.196 04:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:06:46.196 04:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:06:46.196 04:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:06:46.196 04:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:06:46.196 04:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:06:46.196 04:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:06:46.196 04:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:06:46.196 04:55:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:46.196 04:55:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:46.196 [2024-12-14 04:55:56.806366] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:46.196 04:55:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:46.196 04:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # 
raid_bdev_info='{ 00:06:46.196 "name": "Existed_Raid", 00:06:46.196 "aliases": [ 00:06:46.196 "d876f233-bd15-4a95-a18d-3d8ef7fc0993" 00:06:46.196 ], 00:06:46.196 "product_name": "Raid Volume", 00:06:46.196 "block_size": 512, 00:06:46.196 "num_blocks": 131072, 00:06:46.196 "uuid": "d876f233-bd15-4a95-a18d-3d8ef7fc0993", 00:06:46.196 "assigned_rate_limits": { 00:06:46.196 "rw_ios_per_sec": 0, 00:06:46.196 "rw_mbytes_per_sec": 0, 00:06:46.196 "r_mbytes_per_sec": 0, 00:06:46.196 "w_mbytes_per_sec": 0 00:06:46.196 }, 00:06:46.196 "claimed": false, 00:06:46.196 "zoned": false, 00:06:46.196 "supported_io_types": { 00:06:46.196 "read": true, 00:06:46.196 "write": true, 00:06:46.196 "unmap": true, 00:06:46.196 "flush": true, 00:06:46.196 "reset": true, 00:06:46.196 "nvme_admin": false, 00:06:46.196 "nvme_io": false, 00:06:46.196 "nvme_io_md": false, 00:06:46.196 "write_zeroes": true, 00:06:46.196 "zcopy": false, 00:06:46.196 "get_zone_info": false, 00:06:46.196 "zone_management": false, 00:06:46.196 "zone_append": false, 00:06:46.196 "compare": false, 00:06:46.196 "compare_and_write": false, 00:06:46.196 "abort": false, 00:06:46.196 "seek_hole": false, 00:06:46.196 "seek_data": false, 00:06:46.196 "copy": false, 00:06:46.196 "nvme_iov_md": false 00:06:46.196 }, 00:06:46.196 "memory_domains": [ 00:06:46.196 { 00:06:46.196 "dma_device_id": "system", 00:06:46.196 "dma_device_type": 1 00:06:46.196 }, 00:06:46.196 { 00:06:46.196 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:46.196 "dma_device_type": 2 00:06:46.196 }, 00:06:46.196 { 00:06:46.196 "dma_device_id": "system", 00:06:46.196 "dma_device_type": 1 00:06:46.196 }, 00:06:46.196 { 00:06:46.196 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:46.196 "dma_device_type": 2 00:06:46.196 } 00:06:46.196 ], 00:06:46.196 "driver_specific": { 00:06:46.196 "raid": { 00:06:46.196 "uuid": "d876f233-bd15-4a95-a18d-3d8ef7fc0993", 00:06:46.196 "strip_size_kb": 64, 00:06:46.196 "state": "online", 00:06:46.196 "raid_level": "raid0", 
00:06:46.196 "superblock": false, 00:06:46.196 "num_base_bdevs": 2, 00:06:46.196 "num_base_bdevs_discovered": 2, 00:06:46.196 "num_base_bdevs_operational": 2, 00:06:46.196 "base_bdevs_list": [ 00:06:46.196 { 00:06:46.196 "name": "BaseBdev1", 00:06:46.196 "uuid": "3701a594-3501-4ad3-99be-b45113e51869", 00:06:46.196 "is_configured": true, 00:06:46.196 "data_offset": 0, 00:06:46.196 "data_size": 65536 00:06:46.196 }, 00:06:46.196 { 00:06:46.196 "name": "BaseBdev2", 00:06:46.196 "uuid": "bc32b157-75fc-40a1-bbf1-356b7b6ddbb0", 00:06:46.196 "is_configured": true, 00:06:46.196 "data_offset": 0, 00:06:46.196 "data_size": 65536 00:06:46.196 } 00:06:46.196 ] 00:06:46.196 } 00:06:46.196 } 00:06:46.196 }' 00:06:46.196 04:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:06:46.196 04:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:06:46.196 BaseBdev2' 00:06:46.196 04:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:46.196 04:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:06:46.196 04:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:46.196 04:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:06:46.196 04:55:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:46.196 04:55:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:46.196 04:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:46.196 04:55:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:06:46.196 04:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:46.196 04:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:46.196 04:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:46.196 04:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:06:46.196 04:55:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:46.196 04:55:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:46.196 04:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:46.196 04:55:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:46.196 04:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:46.196 04:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:46.196 04:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:06:46.196 04:55:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:46.196 04:55:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:46.196 [2024-12-14 04:55:57.005786] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:06:46.196 [2024-12-14 04:55:57.005812] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:46.196 [2024-12-14 04:55:57.005867] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:46.196 04:55:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:46.196 04:55:57 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:06:46.196 04:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:06:46.196 04:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:06:46.196 04:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:06:46.196 04:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:06:46.196 04:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:06:46.196 04:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:46.196 04:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:06:46.196 04:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:46.196 04:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:46.196 04:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:06:46.196 04:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:46.196 04:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:46.196 04:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:46.196 04:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:46.196 04:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:46.196 04:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:46.196 04:55:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:06:46.196 04:55:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:46.196 04:55:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:46.196 04:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:46.196 "name": "Existed_Raid", 00:06:46.196 "uuid": "d876f233-bd15-4a95-a18d-3d8ef7fc0993", 00:06:46.196 "strip_size_kb": 64, 00:06:46.196 "state": "offline", 00:06:46.196 "raid_level": "raid0", 00:06:46.196 "superblock": false, 00:06:46.196 "num_base_bdevs": 2, 00:06:46.196 "num_base_bdevs_discovered": 1, 00:06:46.196 "num_base_bdevs_operational": 1, 00:06:46.196 "base_bdevs_list": [ 00:06:46.196 { 00:06:46.196 "name": null, 00:06:46.196 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:46.196 "is_configured": false, 00:06:46.196 "data_offset": 0, 00:06:46.196 "data_size": 65536 00:06:46.196 }, 00:06:46.196 { 00:06:46.196 "name": "BaseBdev2", 00:06:46.196 "uuid": "bc32b157-75fc-40a1-bbf1-356b7b6ddbb0", 00:06:46.196 "is_configured": true, 00:06:46.196 "data_offset": 0, 00:06:46.196 "data_size": 65536 00:06:46.196 } 00:06:46.196 ] 00:06:46.196 }' 00:06:46.196 04:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:46.196 04:55:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:46.765 04:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:06:46.765 04:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:06:46.765 04:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:46.765 04:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:06:46.765 04:55:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:46.765 04:55:57 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:46.765 04:55:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:46.765 04:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:06:46.765 04:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:06:46.765 04:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:06:46.765 04:55:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:46.765 04:55:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:46.765 [2024-12-14 04:55:57.508201] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:06:46.765 [2024-12-14 04:55:57.508257] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:06:46.765 04:55:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:46.765 04:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:06:46.765 04:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:06:46.765 04:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:46.765 04:55:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:46.765 04:55:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:46.765 04:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:06:46.765 04:55:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:46.765 04:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 
00:06:46.765 04:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:06:46.765 04:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:06:46.765 04:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 72148 00:06:46.765 04:55:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 72148 ']' 00:06:46.766 04:55:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 72148 00:06:46.766 04:55:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:06:46.766 04:55:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:46.766 04:55:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72148 00:06:46.766 killing process with pid 72148 00:06:46.766 04:55:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:46.766 04:55:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:46.766 04:55:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72148' 00:06:46.766 04:55:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 72148 00:06:46.766 [2024-12-14 04:55:57.613602] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:46.766 04:55:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 72148 00:06:46.766 [2024-12-14 04:55:57.614553] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:47.025 ************************************ 00:06:47.025 END TEST raid_state_function_test 00:06:47.025 ************************************ 00:06:47.025 04:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:06:47.025 00:06:47.025 real 0m3.759s 
00:06:47.025 user 0m5.892s 00:06:47.025 sys 0m0.735s 00:06:47.025 04:55:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:47.025 04:55:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:47.025 04:55:57 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:06:47.025 04:55:57 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:06:47.025 04:55:57 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:47.025 04:55:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:47.285 ************************************ 00:06:47.285 START TEST raid_state_function_test_sb 00:06:47.285 ************************************ 00:06:47.285 04:55:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 2 true 00:06:47.285 04:55:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:06:47.285 04:55:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:06:47.285 04:55:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:06:47.285 04:55:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:06:47.285 04:55:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:06:47.285 04:55:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:06:47.285 04:55:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:06:47.285 04:55:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:06:47.285 04:55:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:06:47.285 04:55:57 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:06:47.285 04:55:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:06:47.285 04:55:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:06:47.285 04:55:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:06:47.285 04:55:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:06:47.285 04:55:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:06:47.285 04:55:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:06:47.285 04:55:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:06:47.285 04:55:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:06:47.285 04:55:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:06:47.285 04:55:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:06:47.285 04:55:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:06:47.285 04:55:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:06:47.285 04:55:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:06:47.285 04:55:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=72385 00:06:47.285 04:55:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:47.285 04:55:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 72385' 00:06:47.285 Process raid pid: 72385 00:06:47.285 04:55:57 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 72385 00:06:47.285 04:55:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 72385 ']' 00:06:47.285 04:55:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:47.285 04:55:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:47.285 04:55:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:47.285 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:47.285 04:55:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:47.285 04:55:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:47.285 [2024-12-14 04:55:58.008215] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:06:47.285 [2024-12-14 04:55:58.008432] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:47.544 [2024-12-14 04:55:58.168205] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.544 [2024-12-14 04:55:58.213177] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.544 [2024-12-14 04:55:58.254923] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:47.545 [2024-12-14 04:55:58.254958] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:48.113 04:55:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:48.113 04:55:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:06:48.113 04:55:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:06:48.113 04:55:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:48.113 04:55:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:48.113 [2024-12-14 04:55:58.836215] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:06:48.113 [2024-12-14 04:55:58.836327] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:06:48.113 [2024-12-14 04:55:58.836343] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:48.113 [2024-12-14 04:55:58.836353] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:48.113 04:55:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:48.113 
04:55:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:06:48.113 04:55:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:48.113 04:55:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:48.113 04:55:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:48.113 04:55:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:48.113 04:55:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:48.113 04:55:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:48.113 04:55:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:48.113 04:55:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:48.113 04:55:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:48.113 04:55:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:48.113 04:55:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:48.113 04:55:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:48.113 04:55:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:48.113 04:55:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:48.113 04:55:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:48.113 "name": "Existed_Raid", 00:06:48.113 "uuid": "e484ab41-de09-43d0-9ae7-748758e4894c", 00:06:48.113 "strip_size_kb": 
64, 00:06:48.113 "state": "configuring", 00:06:48.113 "raid_level": "raid0", 00:06:48.113 "superblock": true, 00:06:48.113 "num_base_bdevs": 2, 00:06:48.113 "num_base_bdevs_discovered": 0, 00:06:48.113 "num_base_bdevs_operational": 2, 00:06:48.113 "base_bdevs_list": [ 00:06:48.113 { 00:06:48.113 "name": "BaseBdev1", 00:06:48.113 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:48.113 "is_configured": false, 00:06:48.113 "data_offset": 0, 00:06:48.113 "data_size": 0 00:06:48.113 }, 00:06:48.113 { 00:06:48.113 "name": "BaseBdev2", 00:06:48.113 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:48.113 "is_configured": false, 00:06:48.113 "data_offset": 0, 00:06:48.113 "data_size": 0 00:06:48.113 } 00:06:48.113 ] 00:06:48.113 }' 00:06:48.113 04:55:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:48.113 04:55:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:48.372 04:55:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:06:48.372 04:55:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:48.631 04:55:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:48.631 [2024-12-14 04:55:59.259380] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:06:48.631 [2024-12-14 04:55:59.259471] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:06:48.631 04:55:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:48.631 04:55:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:06:48.631 04:55:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:48.631 04:55:59 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:48.631 [2024-12-14 04:55:59.271393] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:06:48.631 [2024-12-14 04:55:59.271463] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:06:48.631 [2024-12-14 04:55:59.271489] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:48.631 [2024-12-14 04:55:59.271510] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:48.631 04:55:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:48.631 04:55:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:06:48.631 04:55:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:48.631 04:55:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:48.631 [2024-12-14 04:55:59.291933] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:06:48.631 BaseBdev1 00:06:48.631 04:55:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:48.631 04:55:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:06:48.631 04:55:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:06:48.631 04:55:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:06:48.631 04:55:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:06:48.631 04:55:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:06:48.631 04:55:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # 
bdev_timeout=2000 00:06:48.631 04:55:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:06:48.631 04:55:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:48.631 04:55:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:48.631 04:55:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:48.631 04:55:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:06:48.631 04:55:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:48.631 04:55:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:48.631 [ 00:06:48.631 { 00:06:48.631 "name": "BaseBdev1", 00:06:48.631 "aliases": [ 00:06:48.631 "5700f451-14e8-4a00-9c85-e0c2ed78d5d0" 00:06:48.631 ], 00:06:48.631 "product_name": "Malloc disk", 00:06:48.631 "block_size": 512, 00:06:48.631 "num_blocks": 65536, 00:06:48.631 "uuid": "5700f451-14e8-4a00-9c85-e0c2ed78d5d0", 00:06:48.631 "assigned_rate_limits": { 00:06:48.631 "rw_ios_per_sec": 0, 00:06:48.631 "rw_mbytes_per_sec": 0, 00:06:48.631 "r_mbytes_per_sec": 0, 00:06:48.631 "w_mbytes_per_sec": 0 00:06:48.631 }, 00:06:48.631 "claimed": true, 00:06:48.631 "claim_type": "exclusive_write", 00:06:48.631 "zoned": false, 00:06:48.631 "supported_io_types": { 00:06:48.631 "read": true, 00:06:48.631 "write": true, 00:06:48.631 "unmap": true, 00:06:48.631 "flush": true, 00:06:48.631 "reset": true, 00:06:48.631 "nvme_admin": false, 00:06:48.631 "nvme_io": false, 00:06:48.631 "nvme_io_md": false, 00:06:48.631 "write_zeroes": true, 00:06:48.631 "zcopy": true, 00:06:48.631 "get_zone_info": false, 00:06:48.631 "zone_management": false, 00:06:48.631 "zone_append": false, 00:06:48.631 "compare": false, 00:06:48.631 "compare_and_write": false, 00:06:48.631 
"abort": true, 00:06:48.631 "seek_hole": false, 00:06:48.631 "seek_data": false, 00:06:48.631 "copy": true, 00:06:48.631 "nvme_iov_md": false 00:06:48.631 }, 00:06:48.631 "memory_domains": [ 00:06:48.631 { 00:06:48.631 "dma_device_id": "system", 00:06:48.631 "dma_device_type": 1 00:06:48.631 }, 00:06:48.631 { 00:06:48.631 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:48.631 "dma_device_type": 2 00:06:48.631 } 00:06:48.631 ], 00:06:48.631 "driver_specific": {} 00:06:48.631 } 00:06:48.631 ] 00:06:48.631 04:55:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:48.631 04:55:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:06:48.631 04:55:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:06:48.631 04:55:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:48.631 04:55:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:48.631 04:55:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:48.631 04:55:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:48.631 04:55:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:48.631 04:55:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:48.631 04:55:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:48.631 04:55:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:48.631 04:55:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:48.631 04:55:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:06:48.631 04:55:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:48.631 04:55:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:48.631 04:55:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:48.631 04:55:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:48.631 04:55:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:48.631 "name": "Existed_Raid", 00:06:48.631 "uuid": "1946abcd-4348-4e1e-a4cc-bb74a08046a4", 00:06:48.631 "strip_size_kb": 64, 00:06:48.631 "state": "configuring", 00:06:48.631 "raid_level": "raid0", 00:06:48.631 "superblock": true, 00:06:48.631 "num_base_bdevs": 2, 00:06:48.631 "num_base_bdevs_discovered": 1, 00:06:48.631 "num_base_bdevs_operational": 2, 00:06:48.631 "base_bdevs_list": [ 00:06:48.631 { 00:06:48.631 "name": "BaseBdev1", 00:06:48.631 "uuid": "5700f451-14e8-4a00-9c85-e0c2ed78d5d0", 00:06:48.631 "is_configured": true, 00:06:48.631 "data_offset": 2048, 00:06:48.631 "data_size": 63488 00:06:48.631 }, 00:06:48.631 { 00:06:48.631 "name": "BaseBdev2", 00:06:48.631 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:48.631 "is_configured": false, 00:06:48.631 "data_offset": 0, 00:06:48.631 "data_size": 0 00:06:48.631 } 00:06:48.631 ] 00:06:48.631 }' 00:06:48.631 04:55:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:48.631 04:55:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:48.890 04:55:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:06:48.890 04:55:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:48.890 04:55:59 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:06:48.890 [2024-12-14 04:55:59.719256] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:06:48.890 [2024-12-14 04:55:59.719300] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:06:48.890 04:55:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:48.890 04:55:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:06:48.890 04:55:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:48.890 04:55:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:48.890 [2024-12-14 04:55:59.731280] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:06:48.890 [2024-12-14 04:55:59.733056] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:48.890 [2024-12-14 04:55:59.733093] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:48.890 04:55:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:48.890 04:55:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:06:48.890 04:55:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:06:48.890 04:55:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:06:48.890 04:55:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:48.890 04:55:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:48.890 04:55:59 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:48.890 04:55:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:48.890 04:55:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:48.890 04:55:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:48.890 04:55:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:48.890 04:55:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:48.890 04:55:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:48.890 04:55:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:48.890 04:55:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:48.890 04:55:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:48.890 04:55:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:48.890 04:55:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:49.149 04:55:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:49.149 "name": "Existed_Raid", 00:06:49.149 "uuid": "1e8556e6-2f45-4339-9ae5-3596e72bcd20", 00:06:49.149 "strip_size_kb": 64, 00:06:49.149 "state": "configuring", 00:06:49.149 "raid_level": "raid0", 00:06:49.149 "superblock": true, 00:06:49.149 "num_base_bdevs": 2, 00:06:49.149 "num_base_bdevs_discovered": 1, 00:06:49.149 "num_base_bdevs_operational": 2, 00:06:49.149 "base_bdevs_list": [ 00:06:49.149 { 00:06:49.149 "name": "BaseBdev1", 00:06:49.149 "uuid": "5700f451-14e8-4a00-9c85-e0c2ed78d5d0", 00:06:49.149 "is_configured": true, 00:06:49.149 "data_offset": 2048, 
00:06:49.149 "data_size": 63488 00:06:49.149 }, 00:06:49.149 { 00:06:49.149 "name": "BaseBdev2", 00:06:49.149 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:49.149 "is_configured": false, 00:06:49.149 "data_offset": 0, 00:06:49.149 "data_size": 0 00:06:49.149 } 00:06:49.149 ] 00:06:49.149 }' 00:06:49.149 04:55:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:49.150 04:55:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:49.409 04:56:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:06:49.409 04:56:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:49.409 04:56:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:49.409 [2024-12-14 04:56:00.201394] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:06:49.409 [2024-12-14 04:56:00.202049] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:06:49.409 [2024-12-14 04:56:00.202246] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:06:49.409 BaseBdev2 00:06:49.409 [2024-12-14 04:56:00.203239] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:06:49.409 04:56:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:49.409 [2024-12-14 04:56:00.203728] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:06:49.409 [2024-12-14 04:56:00.203887] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:06:49.409 04:56:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:06:49.409 [2024-12-14 04:56:00.204377] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:06:49.409 04:56:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:06:49.409 04:56:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:06:49.409 04:56:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:06:49.409 04:56:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:06:49.409 04:56:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:06:49.409 04:56:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:06:49.409 04:56:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:49.409 04:56:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:49.409 04:56:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:49.409 04:56:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:06:49.409 04:56:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:49.409 04:56:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:49.409 [ 00:06:49.409 { 00:06:49.409 "name": "BaseBdev2", 00:06:49.409 "aliases": [ 00:06:49.409 "b96143dd-10c1-44dd-928b-47c9847b920b" 00:06:49.409 ], 00:06:49.409 "product_name": "Malloc disk", 00:06:49.409 "block_size": 512, 00:06:49.409 "num_blocks": 65536, 00:06:49.409 "uuid": "b96143dd-10c1-44dd-928b-47c9847b920b", 00:06:49.409 "assigned_rate_limits": { 00:06:49.409 "rw_ios_per_sec": 0, 00:06:49.409 "rw_mbytes_per_sec": 0, 00:06:49.409 "r_mbytes_per_sec": 0, 00:06:49.409 "w_mbytes_per_sec": 0 00:06:49.409 }, 00:06:49.409 "claimed": true, 00:06:49.409 "claim_type": 
"exclusive_write", 00:06:49.409 "zoned": false, 00:06:49.409 "supported_io_types": { 00:06:49.409 "read": true, 00:06:49.409 "write": true, 00:06:49.409 "unmap": true, 00:06:49.409 "flush": true, 00:06:49.409 "reset": true, 00:06:49.409 "nvme_admin": false, 00:06:49.409 "nvme_io": false, 00:06:49.409 "nvme_io_md": false, 00:06:49.409 "write_zeroes": true, 00:06:49.409 "zcopy": true, 00:06:49.409 "get_zone_info": false, 00:06:49.409 "zone_management": false, 00:06:49.409 "zone_append": false, 00:06:49.409 "compare": false, 00:06:49.409 "compare_and_write": false, 00:06:49.409 "abort": true, 00:06:49.409 "seek_hole": false, 00:06:49.409 "seek_data": false, 00:06:49.409 "copy": true, 00:06:49.409 "nvme_iov_md": false 00:06:49.409 }, 00:06:49.409 "memory_domains": [ 00:06:49.409 { 00:06:49.409 "dma_device_id": "system", 00:06:49.409 "dma_device_type": 1 00:06:49.409 }, 00:06:49.409 { 00:06:49.409 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:49.409 "dma_device_type": 2 00:06:49.409 } 00:06:49.409 ], 00:06:49.409 "driver_specific": {} 00:06:49.409 } 00:06:49.409 ] 00:06:49.409 04:56:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:49.409 04:56:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:06:49.409 04:56:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:06:49.409 04:56:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:06:49.409 04:56:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:06:49.409 04:56:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:49.409 04:56:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:06:49.409 04:56:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:06:49.409 04:56:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:49.409 04:56:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:49.409 04:56:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:49.409 04:56:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:49.409 04:56:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:49.409 04:56:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:49.409 04:56:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:49.409 04:56:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:49.409 04:56:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:49.409 04:56:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:49.409 04:56:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:49.669 04:56:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:49.669 "name": "Existed_Raid", 00:06:49.669 "uuid": "1e8556e6-2f45-4339-9ae5-3596e72bcd20", 00:06:49.669 "strip_size_kb": 64, 00:06:49.669 "state": "online", 00:06:49.669 "raid_level": "raid0", 00:06:49.669 "superblock": true, 00:06:49.669 "num_base_bdevs": 2, 00:06:49.669 "num_base_bdevs_discovered": 2, 00:06:49.669 "num_base_bdevs_operational": 2, 00:06:49.669 "base_bdevs_list": [ 00:06:49.669 { 00:06:49.669 "name": "BaseBdev1", 00:06:49.669 "uuid": "5700f451-14e8-4a00-9c85-e0c2ed78d5d0", 00:06:49.669 "is_configured": true, 00:06:49.669 "data_offset": 2048, 00:06:49.669 "data_size": 63488 
00:06:49.669 }, 00:06:49.669 { 00:06:49.669 "name": "BaseBdev2", 00:06:49.669 "uuid": "b96143dd-10c1-44dd-928b-47c9847b920b", 00:06:49.669 "is_configured": true, 00:06:49.669 "data_offset": 2048, 00:06:49.669 "data_size": 63488 00:06:49.669 } 00:06:49.669 ] 00:06:49.669 }' 00:06:49.669 04:56:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:49.669 04:56:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:49.928 04:56:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:06:49.928 04:56:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:06:49.928 04:56:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:06:49.928 04:56:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:06:49.928 04:56:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:06:49.928 04:56:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:06:49.928 04:56:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:06:49.928 04:56:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:49.928 04:56:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:49.928 04:56:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:06:49.928 [2024-12-14 04:56:00.620914] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:49.928 04:56:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:49.928 04:56:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:06:49.928 "name": 
"Existed_Raid", 00:06:49.928 "aliases": [ 00:06:49.928 "1e8556e6-2f45-4339-9ae5-3596e72bcd20" 00:06:49.928 ], 00:06:49.928 "product_name": "Raid Volume", 00:06:49.928 "block_size": 512, 00:06:49.928 "num_blocks": 126976, 00:06:49.928 "uuid": "1e8556e6-2f45-4339-9ae5-3596e72bcd20", 00:06:49.928 "assigned_rate_limits": { 00:06:49.928 "rw_ios_per_sec": 0, 00:06:49.928 "rw_mbytes_per_sec": 0, 00:06:49.928 "r_mbytes_per_sec": 0, 00:06:49.928 "w_mbytes_per_sec": 0 00:06:49.928 }, 00:06:49.928 "claimed": false, 00:06:49.928 "zoned": false, 00:06:49.928 "supported_io_types": { 00:06:49.928 "read": true, 00:06:49.928 "write": true, 00:06:49.928 "unmap": true, 00:06:49.928 "flush": true, 00:06:49.928 "reset": true, 00:06:49.928 "nvme_admin": false, 00:06:49.928 "nvme_io": false, 00:06:49.928 "nvme_io_md": false, 00:06:49.928 "write_zeroes": true, 00:06:49.928 "zcopy": false, 00:06:49.928 "get_zone_info": false, 00:06:49.928 "zone_management": false, 00:06:49.928 "zone_append": false, 00:06:49.928 "compare": false, 00:06:49.928 "compare_and_write": false, 00:06:49.928 "abort": false, 00:06:49.928 "seek_hole": false, 00:06:49.928 "seek_data": false, 00:06:49.928 "copy": false, 00:06:49.928 "nvme_iov_md": false 00:06:49.928 }, 00:06:49.928 "memory_domains": [ 00:06:49.928 { 00:06:49.928 "dma_device_id": "system", 00:06:49.928 "dma_device_type": 1 00:06:49.928 }, 00:06:49.928 { 00:06:49.928 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:49.928 "dma_device_type": 2 00:06:49.929 }, 00:06:49.929 { 00:06:49.929 "dma_device_id": "system", 00:06:49.929 "dma_device_type": 1 00:06:49.929 }, 00:06:49.929 { 00:06:49.929 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:49.929 "dma_device_type": 2 00:06:49.929 } 00:06:49.929 ], 00:06:49.929 "driver_specific": { 00:06:49.929 "raid": { 00:06:49.929 "uuid": "1e8556e6-2f45-4339-9ae5-3596e72bcd20", 00:06:49.929 "strip_size_kb": 64, 00:06:49.929 "state": "online", 00:06:49.929 "raid_level": "raid0", 00:06:49.929 "superblock": true, 00:06:49.929 
"num_base_bdevs": 2, 00:06:49.929 "num_base_bdevs_discovered": 2, 00:06:49.929 "num_base_bdevs_operational": 2, 00:06:49.929 "base_bdevs_list": [ 00:06:49.929 { 00:06:49.929 "name": "BaseBdev1", 00:06:49.929 "uuid": "5700f451-14e8-4a00-9c85-e0c2ed78d5d0", 00:06:49.929 "is_configured": true, 00:06:49.929 "data_offset": 2048, 00:06:49.929 "data_size": 63488 00:06:49.929 }, 00:06:49.929 { 00:06:49.929 "name": "BaseBdev2", 00:06:49.929 "uuid": "b96143dd-10c1-44dd-928b-47c9847b920b", 00:06:49.929 "is_configured": true, 00:06:49.929 "data_offset": 2048, 00:06:49.929 "data_size": 63488 00:06:49.929 } 00:06:49.929 ] 00:06:49.929 } 00:06:49.929 } 00:06:49.929 }' 00:06:49.929 04:56:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:06:49.929 04:56:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:06:49.929 BaseBdev2' 00:06:49.929 04:56:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:49.929 04:56:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:06:49.929 04:56:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:49.929 04:56:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:06:49.929 04:56:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:49.929 04:56:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:49.929 04:56:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:49.929 04:56:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:06:50.192 04:56:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:50.192 04:56:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:50.192 04:56:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:50.192 04:56:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:06:50.192 04:56:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:50.192 04:56:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:50.192 04:56:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:50.192 04:56:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:50.192 04:56:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:50.192 04:56:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:50.192 04:56:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:06:50.192 04:56:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:50.192 04:56:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:50.192 [2024-12-14 04:56:00.868273] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:06:50.192 [2024-12-14 04:56:00.868310] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:50.192 [2024-12-14 04:56:00.868358] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:50.192 04:56:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:06:50.192 04:56:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:06:50.192 04:56:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:06:50.192 04:56:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:06:50.192 04:56:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:06:50.192 04:56:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:06:50.192 04:56:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:06:50.192 04:56:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:50.192 04:56:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:06:50.192 04:56:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:50.192 04:56:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:50.192 04:56:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:06:50.192 04:56:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:50.192 04:56:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:50.192 04:56:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:50.192 04:56:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:50.192 04:56:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:50.192 04:56:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:50.192 04:56:00 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:50.192 04:56:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:50.192 04:56:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:50.192 04:56:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:50.192 "name": "Existed_Raid", 00:06:50.192 "uuid": "1e8556e6-2f45-4339-9ae5-3596e72bcd20", 00:06:50.192 "strip_size_kb": 64, 00:06:50.192 "state": "offline", 00:06:50.192 "raid_level": "raid0", 00:06:50.192 "superblock": true, 00:06:50.192 "num_base_bdevs": 2, 00:06:50.192 "num_base_bdevs_discovered": 1, 00:06:50.192 "num_base_bdevs_operational": 1, 00:06:50.192 "base_bdevs_list": [ 00:06:50.192 { 00:06:50.192 "name": null, 00:06:50.192 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:50.192 "is_configured": false, 00:06:50.192 "data_offset": 0, 00:06:50.192 "data_size": 63488 00:06:50.192 }, 00:06:50.192 { 00:06:50.192 "name": "BaseBdev2", 00:06:50.192 "uuid": "b96143dd-10c1-44dd-928b-47c9847b920b", 00:06:50.192 "is_configured": true, 00:06:50.192 "data_offset": 2048, 00:06:50.192 "data_size": 63488 00:06:50.192 } 00:06:50.192 ] 00:06:50.192 }' 00:06:50.192 04:56:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:50.192 04:56:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:50.465 04:56:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:06:50.465 04:56:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:06:50.465 04:56:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:50.465 04:56:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:06:50.465 04:56:01 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:50.465 04:56:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:50.465 04:56:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:50.465 04:56:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:06:50.465 04:56:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:06:50.465 04:56:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:06:50.465 04:56:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:50.465 04:56:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:50.744 [2024-12-14 04:56:01.342904] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:06:50.744 [2024-12-14 04:56:01.342983] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:06:50.744 04:56:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:50.744 04:56:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:06:50.744 04:56:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:06:50.744 04:56:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:06:50.744 04:56:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:50.744 04:56:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:50.745 04:56:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:50.745 04:56:01 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:50.745 04:56:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:06:50.745 04:56:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:06:50.745 04:56:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:06:50.745 04:56:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 72385 00:06:50.745 04:56:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 72385 ']' 00:06:50.745 04:56:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 72385 00:06:50.745 04:56:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:06:50.745 04:56:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:50.745 04:56:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72385 00:06:50.745 killing process with pid 72385 00:06:50.745 04:56:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:50.745 04:56:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:50.745 04:56:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72385' 00:06:50.745 04:56:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 72385 00:06:50.745 [2024-12-14 04:56:01.433087] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:50.745 04:56:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 72385 00:06:50.745 [2024-12-14 04:56:01.434038] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:51.004 04:56:01 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@328 -- # return 0 00:06:51.004 00:06:51.004 real 0m3.761s 00:06:51.004 user 0m5.895s 00:06:51.004 sys 0m0.716s 00:06:51.004 04:56:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:51.004 ************************************ 00:06:51.004 END TEST raid_state_function_test_sb 00:06:51.004 ************************************ 00:06:51.004 04:56:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:51.004 04:56:01 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:06:51.004 04:56:01 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:06:51.004 04:56:01 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:51.004 04:56:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:51.004 ************************************ 00:06:51.004 START TEST raid_superblock_test 00:06:51.004 ************************************ 00:06:51.004 04:56:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid0 2 00:06:51.004 04:56:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:06:51.004 04:56:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:06:51.004 04:56:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:06:51.004 04:56:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:06:51.004 04:56:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:06:51.004 04:56:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:06:51.004 04:56:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:06:51.004 04:56:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:06:51.004 04:56:01 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:06:51.004 04:56:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:06:51.004 04:56:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:06:51.004 04:56:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:06:51.004 04:56:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:06:51.004 04:56:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:06:51.004 04:56:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:06:51.004 04:56:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:06:51.004 04:56:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=72626 00:06:51.004 04:56:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:06:51.004 04:56:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 72626 00:06:51.004 04:56:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 72626 ']' 00:06:51.005 04:56:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:51.005 04:56:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:51.005 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:51.005 04:56:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:51.005 04:56:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:51.005 04:56:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:51.005 [2024-12-14 04:56:01.834585] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:51.005 [2024-12-14 04:56:01.834725] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72626 ] 00:06:51.264 [2024-12-14 04:56:01.992371] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.264 [2024-12-14 04:56:02.037148] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.264 [2024-12-14 04:56:02.078925] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:51.264 [2024-12-14 04:56:02.078960] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:51.834 04:56:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:51.834 04:56:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:06:51.834 04:56:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:06:51.834 04:56:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:06:51.834 04:56:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:06:51.834 04:56:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:06:51.834 04:56:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:06:51.834 04:56:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:06:51.834 04:56:02 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:06:51.834 04:56:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:06:51.834 04:56:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:06:51.834 04:56:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:51.834 04:56:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:51.834 malloc1 00:06:51.834 04:56:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:51.834 04:56:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:06:51.834 04:56:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:51.834 04:56:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:51.834 [2024-12-14 04:56:02.676974] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:06:51.834 [2024-12-14 04:56:02.677055] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:51.834 [2024-12-14 04:56:02.677076] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:06:51.834 [2024-12-14 04:56:02.677092] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:51.834 [2024-12-14 04:56:02.679128] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:51.834 [2024-12-14 04:56:02.679194] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:06:51.834 pt1 00:06:51.834 04:56:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:51.834 04:56:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:06:51.834 04:56:02 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:06:51.834 04:56:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:06:51.834 04:56:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:06:51.834 04:56:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:06:51.834 04:56:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:06:51.834 04:56:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:06:51.834 04:56:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:06:51.834 04:56:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:06:51.834 04:56:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:51.834 04:56:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:51.834 malloc2 00:06:52.094 04:56:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:52.094 04:56:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:06:52.094 04:56:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:52.094 04:56:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:52.094 [2024-12-14 04:56:02.721964] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:06:52.094 [2024-12-14 04:56:02.722112] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:52.094 [2024-12-14 04:56:02.722200] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:06:52.094 
[2024-12-14 04:56:02.722239] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:52.094 [2024-12-14 04:56:02.726825] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:52.094 [2024-12-14 04:56:02.726895] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:06:52.094 pt2 00:06:52.094 04:56:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:52.094 04:56:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:06:52.094 04:56:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:06:52.094 04:56:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:06:52.094 04:56:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:52.094 04:56:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:52.094 [2024-12-14 04:56:02.735178] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:06:52.094 [2024-12-14 04:56:02.737864] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:06:52.094 [2024-12-14 04:56:02.738082] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:06:52.094 [2024-12-14 04:56:02.738105] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:06:52.094 [2024-12-14 04:56:02.738502] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:06:52.094 [2024-12-14 04:56:02.738692] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:06:52.094 [2024-12-14 04:56:02.738715] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:06:52.094 [2024-12-14 04:56:02.738900] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:52.094 04:56:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:52.094 04:56:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:06:52.094 04:56:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:06:52.094 04:56:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:06:52.094 04:56:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:52.094 04:56:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:52.094 04:56:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:52.094 04:56:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:52.094 04:56:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:52.094 04:56:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:52.094 04:56:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:52.094 04:56:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:52.094 04:56:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:52.094 04:56:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:52.094 04:56:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:06:52.094 04:56:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:52.094 04:56:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:52.094 "name": "raid_bdev1", 00:06:52.094 "uuid": 
"64888f77-2086-4c03-b48f-e6f768fbc7cb", 00:06:52.094 "strip_size_kb": 64, 00:06:52.094 "state": "online", 00:06:52.094 "raid_level": "raid0", 00:06:52.094 "superblock": true, 00:06:52.094 "num_base_bdevs": 2, 00:06:52.094 "num_base_bdevs_discovered": 2, 00:06:52.094 "num_base_bdevs_operational": 2, 00:06:52.094 "base_bdevs_list": [ 00:06:52.094 { 00:06:52.094 "name": "pt1", 00:06:52.094 "uuid": "00000000-0000-0000-0000-000000000001", 00:06:52.094 "is_configured": true, 00:06:52.094 "data_offset": 2048, 00:06:52.094 "data_size": 63488 00:06:52.094 }, 00:06:52.094 { 00:06:52.094 "name": "pt2", 00:06:52.094 "uuid": "00000000-0000-0000-0000-000000000002", 00:06:52.094 "is_configured": true, 00:06:52.094 "data_offset": 2048, 00:06:52.094 "data_size": 63488 00:06:52.094 } 00:06:52.094 ] 00:06:52.094 }' 00:06:52.094 04:56:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:52.095 04:56:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:52.354 04:56:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:06:52.354 04:56:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:06:52.354 04:56:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:06:52.354 04:56:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:06:52.354 04:56:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:06:52.354 04:56:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:06:52.354 04:56:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:06:52.354 04:56:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:52.354 04:56:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:52.354 
04:56:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:06:52.354 [2024-12-14 04:56:03.102685] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:52.354 04:56:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:52.354 04:56:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:06:52.354 "name": "raid_bdev1", 00:06:52.354 "aliases": [ 00:06:52.354 "64888f77-2086-4c03-b48f-e6f768fbc7cb" 00:06:52.354 ], 00:06:52.354 "product_name": "Raid Volume", 00:06:52.354 "block_size": 512, 00:06:52.354 "num_blocks": 126976, 00:06:52.354 "uuid": "64888f77-2086-4c03-b48f-e6f768fbc7cb", 00:06:52.354 "assigned_rate_limits": { 00:06:52.354 "rw_ios_per_sec": 0, 00:06:52.354 "rw_mbytes_per_sec": 0, 00:06:52.354 "r_mbytes_per_sec": 0, 00:06:52.354 "w_mbytes_per_sec": 0 00:06:52.354 }, 00:06:52.354 "claimed": false, 00:06:52.354 "zoned": false, 00:06:52.354 "supported_io_types": { 00:06:52.354 "read": true, 00:06:52.354 "write": true, 00:06:52.354 "unmap": true, 00:06:52.354 "flush": true, 00:06:52.354 "reset": true, 00:06:52.354 "nvme_admin": false, 00:06:52.354 "nvme_io": false, 00:06:52.354 "nvme_io_md": false, 00:06:52.354 "write_zeroes": true, 00:06:52.354 "zcopy": false, 00:06:52.354 "get_zone_info": false, 00:06:52.354 "zone_management": false, 00:06:52.354 "zone_append": false, 00:06:52.354 "compare": false, 00:06:52.354 "compare_and_write": false, 00:06:52.354 "abort": false, 00:06:52.354 "seek_hole": false, 00:06:52.354 "seek_data": false, 00:06:52.354 "copy": false, 00:06:52.354 "nvme_iov_md": false 00:06:52.354 }, 00:06:52.354 "memory_domains": [ 00:06:52.354 { 00:06:52.354 "dma_device_id": "system", 00:06:52.354 "dma_device_type": 1 00:06:52.354 }, 00:06:52.354 { 00:06:52.354 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:52.354 "dma_device_type": 2 00:06:52.354 }, 00:06:52.354 { 00:06:52.354 "dma_device_id": "system", 00:06:52.354 
"dma_device_type": 1 00:06:52.354 }, 00:06:52.354 { 00:06:52.354 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:52.354 "dma_device_type": 2 00:06:52.354 } 00:06:52.354 ], 00:06:52.354 "driver_specific": { 00:06:52.354 "raid": { 00:06:52.354 "uuid": "64888f77-2086-4c03-b48f-e6f768fbc7cb", 00:06:52.354 "strip_size_kb": 64, 00:06:52.354 "state": "online", 00:06:52.354 "raid_level": "raid0", 00:06:52.354 "superblock": true, 00:06:52.354 "num_base_bdevs": 2, 00:06:52.354 "num_base_bdevs_discovered": 2, 00:06:52.354 "num_base_bdevs_operational": 2, 00:06:52.354 "base_bdevs_list": [ 00:06:52.354 { 00:06:52.354 "name": "pt1", 00:06:52.354 "uuid": "00000000-0000-0000-0000-000000000001", 00:06:52.354 "is_configured": true, 00:06:52.354 "data_offset": 2048, 00:06:52.354 "data_size": 63488 00:06:52.354 }, 00:06:52.354 { 00:06:52.354 "name": "pt2", 00:06:52.354 "uuid": "00000000-0000-0000-0000-000000000002", 00:06:52.354 "is_configured": true, 00:06:52.354 "data_offset": 2048, 00:06:52.354 "data_size": 63488 00:06:52.354 } 00:06:52.354 ] 00:06:52.354 } 00:06:52.354 } 00:06:52.354 }' 00:06:52.354 04:56:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:06:52.354 04:56:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:06:52.354 pt2' 00:06:52.355 04:56:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:52.355 04:56:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:06:52.355 04:56:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:52.355 04:56:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:06:52.355 04:56:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, 
.dif_type] | join(" ")' 00:06:52.355 04:56:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:52.355 04:56:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:52.614 04:56:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:52.614 04:56:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:52.614 04:56:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:52.614 04:56:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:52.614 04:56:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:52.614 04:56:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:06:52.614 04:56:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:52.614 04:56:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:52.614 04:56:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:52.614 04:56:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:52.614 04:56:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:52.614 04:56:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:06:52.614 04:56:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:06:52.614 04:56:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:52.614 04:56:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:52.614 [2024-12-14 04:56:03.338262] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 
00:06:52.614 04:56:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:52.614 04:56:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=64888f77-2086-4c03-b48f-e6f768fbc7cb 00:06:52.614 04:56:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 64888f77-2086-4c03-b48f-e6f768fbc7cb ']' 00:06:52.614 04:56:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:06:52.614 04:56:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:52.614 04:56:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:52.614 [2024-12-14 04:56:03.385963] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:06:52.614 [2024-12-14 04:56:03.386028] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:52.614 [2024-12-14 04:56:03.386112] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:52.614 [2024-12-14 04:56:03.386185] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:52.614 [2024-12-14 04:56:03.386243] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:06:52.615 04:56:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:52.615 04:56:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:52.615 04:56:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:52.615 04:56:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:52.615 04:56:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:06:52.615 04:56:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:52.615 
04:56:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:06:52.615 04:56:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:06:52.615 04:56:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:06:52.615 04:56:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:06:52.615 04:56:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:52.615 04:56:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:52.615 04:56:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:52.615 04:56:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:06:52.615 04:56:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:06:52.615 04:56:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:52.615 04:56:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:52.615 04:56:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:52.615 04:56:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:06:52.615 04:56:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:06:52.615 04:56:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:52.615 04:56:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:52.615 04:56:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:52.875 04:56:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:06:52.875 04:56:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT 
rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:06:52.875 04:56:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:06:52.875 04:56:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:06:52.875 04:56:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:52.875 04:56:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:52.875 04:56:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:52.875 04:56:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:52.875 04:56:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:06:52.875 04:56:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:52.875 04:56:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:52.875 [2024-12-14 04:56:03.521756] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:06:52.875 [2024-12-14 04:56:03.523554] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:06:52.875 [2024-12-14 04:56:03.523657] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:06:52.875 [2024-12-14 04:56:03.523739] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:06:52.875 [2024-12-14 04:56:03.523795] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:06:52.875 [2024-12-14 04:56:03.523825] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state configuring 00:06:52.875 request: 00:06:52.875 { 00:06:52.875 "name": "raid_bdev1", 00:06:52.875 "raid_level": "raid0", 00:06:52.875 "base_bdevs": [ 00:06:52.875 "malloc1", 00:06:52.875 "malloc2" 00:06:52.875 ], 00:06:52.875 "strip_size_kb": 64, 00:06:52.875 "superblock": false, 00:06:52.875 "method": "bdev_raid_create", 00:06:52.875 "req_id": 1 00:06:52.875 } 00:06:52.875 Got JSON-RPC error response 00:06:52.875 response: 00:06:52.875 { 00:06:52.875 "code": -17, 00:06:52.875 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:06:52.875 } 00:06:52.875 04:56:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:52.875 04:56:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:06:52.875 04:56:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:52.875 04:56:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:52.875 04:56:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:52.875 04:56:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:52.875 04:56:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:52.875 04:56:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:52.875 04:56:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:06:52.875 04:56:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:52.875 04:56:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:06:52.875 04:56:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:06:52.875 04:56:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:06:52.875 04:56:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:52.875 04:56:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:52.875 [2024-12-14 04:56:03.589603] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:06:52.875 [2024-12-14 04:56:03.589699] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:52.875 [2024-12-14 04:56:03.589719] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:06:52.875 [2024-12-14 04:56:03.589727] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:52.875 [2024-12-14 04:56:03.591725] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:52.875 [2024-12-14 04:56:03.591759] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:06:52.875 [2024-12-14 04:56:03.591817] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:06:52.875 [2024-12-14 04:56:03.591858] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:06:52.875 pt1 00:06:52.875 04:56:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:52.875 04:56:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:06:52.875 04:56:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:06:52.875 04:56:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:52.875 04:56:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:52.875 04:56:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:52.875 04:56:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:06:52.875 04:56:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:52.875 04:56:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:52.875 04:56:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:52.875 04:56:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:52.875 04:56:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:52.875 04:56:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:06:52.875 04:56:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:52.875 04:56:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:52.875 04:56:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:52.875 04:56:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:52.875 "name": "raid_bdev1", 00:06:52.875 "uuid": "64888f77-2086-4c03-b48f-e6f768fbc7cb", 00:06:52.875 "strip_size_kb": 64, 00:06:52.875 "state": "configuring", 00:06:52.875 "raid_level": "raid0", 00:06:52.875 "superblock": true, 00:06:52.875 "num_base_bdevs": 2, 00:06:52.875 "num_base_bdevs_discovered": 1, 00:06:52.875 "num_base_bdevs_operational": 2, 00:06:52.875 "base_bdevs_list": [ 00:06:52.875 { 00:06:52.875 "name": "pt1", 00:06:52.875 "uuid": "00000000-0000-0000-0000-000000000001", 00:06:52.875 "is_configured": true, 00:06:52.875 "data_offset": 2048, 00:06:52.875 "data_size": 63488 00:06:52.875 }, 00:06:52.875 { 00:06:52.875 "name": null, 00:06:52.875 "uuid": "00000000-0000-0000-0000-000000000002", 00:06:52.875 "is_configured": false, 00:06:52.875 "data_offset": 2048, 00:06:52.875 "data_size": 63488 00:06:52.875 } 00:06:52.875 ] 00:06:52.875 }' 00:06:52.875 04:56:03 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:52.875 04:56:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:53.444 04:56:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:06:53.444 04:56:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:06:53.444 04:56:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:06:53.444 04:56:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:06:53.444 04:56:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:53.444 04:56:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:53.444 [2024-12-14 04:56:04.028837] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:06:53.444 [2024-12-14 04:56:04.028952] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:53.444 [2024-12-14 04:56:04.028989] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:06:53.444 [2024-12-14 04:56:04.029016] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:53.444 [2024-12-14 04:56:04.029413] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:53.444 [2024-12-14 04:56:04.029464] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:06:53.444 [2024-12-14 04:56:04.029551] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:06:53.444 [2024-12-14 04:56:04.029597] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:06:53.444 [2024-12-14 04:56:04.029695] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:06:53.444 [2024-12-14 04:56:04.029728] 
bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:06:53.444 [2024-12-14 04:56:04.029965] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:06:53.444 [2024-12-14 04:56:04.030104] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:06:53.444 [2024-12-14 04:56:04.030149] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:06:53.444 [2024-12-14 04:56:04.030289] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:53.444 pt2 00:06:53.444 04:56:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:53.444 04:56:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:06:53.444 04:56:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:06:53.444 04:56:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:06:53.444 04:56:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:06:53.444 04:56:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:06:53.444 04:56:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:53.444 04:56:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:53.444 04:56:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:53.444 04:56:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:53.444 04:56:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:53.444 04:56:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:53.444 04:56:04 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:06:53.444 04:56:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:53.444 04:56:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:06:53.444 04:56:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:53.444 04:56:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:53.444 04:56:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:53.444 04:56:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:53.444 "name": "raid_bdev1", 00:06:53.444 "uuid": "64888f77-2086-4c03-b48f-e6f768fbc7cb", 00:06:53.444 "strip_size_kb": 64, 00:06:53.444 "state": "online", 00:06:53.444 "raid_level": "raid0", 00:06:53.444 "superblock": true, 00:06:53.444 "num_base_bdevs": 2, 00:06:53.444 "num_base_bdevs_discovered": 2, 00:06:53.444 "num_base_bdevs_operational": 2, 00:06:53.444 "base_bdevs_list": [ 00:06:53.444 { 00:06:53.444 "name": "pt1", 00:06:53.444 "uuid": "00000000-0000-0000-0000-000000000001", 00:06:53.444 "is_configured": true, 00:06:53.444 "data_offset": 2048, 00:06:53.444 "data_size": 63488 00:06:53.444 }, 00:06:53.444 { 00:06:53.444 "name": "pt2", 00:06:53.444 "uuid": "00000000-0000-0000-0000-000000000002", 00:06:53.444 "is_configured": true, 00:06:53.444 "data_offset": 2048, 00:06:53.444 "data_size": 63488 00:06:53.444 } 00:06:53.444 ] 00:06:53.444 }' 00:06:53.444 04:56:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:53.444 04:56:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:53.704 04:56:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:06:53.704 04:56:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:06:53.704 
04:56:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:06:53.704 04:56:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:06:53.704 04:56:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:06:53.704 04:56:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:06:53.704 04:56:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:06:53.704 04:56:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:06:53.704 04:56:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:53.704 04:56:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:53.704 [2024-12-14 04:56:04.424447] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:53.704 04:56:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:53.704 04:56:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:06:53.704 "name": "raid_bdev1", 00:06:53.704 "aliases": [ 00:06:53.704 "64888f77-2086-4c03-b48f-e6f768fbc7cb" 00:06:53.704 ], 00:06:53.704 "product_name": "Raid Volume", 00:06:53.704 "block_size": 512, 00:06:53.704 "num_blocks": 126976, 00:06:53.704 "uuid": "64888f77-2086-4c03-b48f-e6f768fbc7cb", 00:06:53.704 "assigned_rate_limits": { 00:06:53.704 "rw_ios_per_sec": 0, 00:06:53.704 "rw_mbytes_per_sec": 0, 00:06:53.704 "r_mbytes_per_sec": 0, 00:06:53.704 "w_mbytes_per_sec": 0 00:06:53.704 }, 00:06:53.704 "claimed": false, 00:06:53.704 "zoned": false, 00:06:53.704 "supported_io_types": { 00:06:53.704 "read": true, 00:06:53.704 "write": true, 00:06:53.704 "unmap": true, 00:06:53.704 "flush": true, 00:06:53.704 "reset": true, 00:06:53.704 "nvme_admin": false, 00:06:53.704 "nvme_io": false, 00:06:53.704 "nvme_io_md": false, 00:06:53.704 
"write_zeroes": true, 00:06:53.704 "zcopy": false, 00:06:53.704 "get_zone_info": false, 00:06:53.704 "zone_management": false, 00:06:53.704 "zone_append": false, 00:06:53.704 "compare": false, 00:06:53.704 "compare_and_write": false, 00:06:53.704 "abort": false, 00:06:53.704 "seek_hole": false, 00:06:53.704 "seek_data": false, 00:06:53.704 "copy": false, 00:06:53.704 "nvme_iov_md": false 00:06:53.704 }, 00:06:53.704 "memory_domains": [ 00:06:53.704 { 00:06:53.704 "dma_device_id": "system", 00:06:53.704 "dma_device_type": 1 00:06:53.704 }, 00:06:53.704 { 00:06:53.704 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:53.704 "dma_device_type": 2 00:06:53.704 }, 00:06:53.704 { 00:06:53.704 "dma_device_id": "system", 00:06:53.704 "dma_device_type": 1 00:06:53.704 }, 00:06:53.704 { 00:06:53.704 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:53.704 "dma_device_type": 2 00:06:53.704 } 00:06:53.704 ], 00:06:53.704 "driver_specific": { 00:06:53.704 "raid": { 00:06:53.704 "uuid": "64888f77-2086-4c03-b48f-e6f768fbc7cb", 00:06:53.704 "strip_size_kb": 64, 00:06:53.704 "state": "online", 00:06:53.704 "raid_level": "raid0", 00:06:53.704 "superblock": true, 00:06:53.704 "num_base_bdevs": 2, 00:06:53.704 "num_base_bdevs_discovered": 2, 00:06:53.704 "num_base_bdevs_operational": 2, 00:06:53.704 "base_bdevs_list": [ 00:06:53.704 { 00:06:53.704 "name": "pt1", 00:06:53.704 "uuid": "00000000-0000-0000-0000-000000000001", 00:06:53.704 "is_configured": true, 00:06:53.704 "data_offset": 2048, 00:06:53.704 "data_size": 63488 00:06:53.704 }, 00:06:53.704 { 00:06:53.704 "name": "pt2", 00:06:53.704 "uuid": "00000000-0000-0000-0000-000000000002", 00:06:53.704 "is_configured": true, 00:06:53.704 "data_offset": 2048, 00:06:53.704 "data_size": 63488 00:06:53.704 } 00:06:53.704 ] 00:06:53.704 } 00:06:53.704 } 00:06:53.704 }' 00:06:53.704 04:56:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 
00:06:53.704 04:56:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:06:53.704 pt2' 00:06:53.704 04:56:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:53.704 04:56:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:06:53.704 04:56:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:53.704 04:56:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:53.704 04:56:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:06:53.704 04:56:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:53.704 04:56:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:53.704 04:56:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:53.964 04:56:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:53.964 04:56:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:53.964 04:56:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:53.964 04:56:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:06:53.964 04:56:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:53.964 04:56:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:53.964 04:56:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:53.964 04:56:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:53.964 04:56:04 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:53.964 04:56:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:53.964 04:56:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:06:53.964 04:56:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:06:53.964 04:56:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:53.964 04:56:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:53.964 [2024-12-14 04:56:04.644019] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:53.964 04:56:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:53.964 04:56:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 64888f77-2086-4c03-b48f-e6f768fbc7cb '!=' 64888f77-2086-4c03-b48f-e6f768fbc7cb ']' 00:06:53.964 04:56:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:06:53.964 04:56:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:06:53.964 04:56:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:06:53.964 04:56:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 72626 00:06:53.964 04:56:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 72626 ']' 00:06:53.964 04:56:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 72626 00:06:53.964 04:56:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:06:53.964 04:56:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:53.964 04:56:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72626 00:06:53.964 killing process with pid 72626 
00:06:53.964 04:56:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:53.964 04:56:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:53.964 04:56:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72626' 00:06:53.964 04:56:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 72626 00:06:53.964 [2024-12-14 04:56:04.729818] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:53.964 [2024-12-14 04:56:04.729901] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:53.964 [2024-12-14 04:56:04.729950] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:53.964 [2024-12-14 04:56:04.729959] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:06:53.964 04:56:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 72626 00:06:53.964 [2024-12-14 04:56:04.752496] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:54.224 04:56:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:06:54.224 00:06:54.224 real 0m3.247s 00:06:54.224 user 0m4.984s 00:06:54.224 sys 0m0.679s 00:06:54.224 04:56:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:54.224 04:56:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:54.224 ************************************ 00:06:54.224 END TEST raid_superblock_test 00:06:54.224 ************************************ 00:06:54.224 04:56:05 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 2 read 00:06:54.224 04:56:05 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:06:54.224 04:56:05 bdev_raid -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:06:54.224 04:56:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:54.224 ************************************ 00:06:54.224 START TEST raid_read_error_test 00:06:54.224 ************************************ 00:06:54.224 04:56:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 2 read 00:06:54.224 04:56:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:06:54.224 04:56:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:06:54.224 04:56:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:06:54.224 04:56:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:06:54.224 04:56:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:06:54.224 04:56:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:06:54.224 04:56:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:06:54.224 04:56:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:06:54.224 04:56:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:06:54.224 04:56:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:06:54.224 04:56:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:06:54.224 04:56:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:06:54.224 04:56:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:06:54.224 04:56:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:06:54.224 04:56:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:06:54.224 04:56:05 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:06:54.224 04:56:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:06:54.224 04:56:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:06:54.224 04:56:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:06:54.224 04:56:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:06:54.224 04:56:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:06:54.224 04:56:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:06:54.224 04:56:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.nv2ti3N3v9 00:06:54.224 04:56:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=72821 00:06:54.224 04:56:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:06:54.224 04:56:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 72821 00:06:54.224 04:56:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 72821 ']' 00:06:54.224 04:56:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:54.224 04:56:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:54.224 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:54.224 04:56:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:54.224 04:56:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:54.224 04:56:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:54.486 [2024-12-14 04:56:05.162407] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:54.486 [2024-12-14 04:56:05.162535] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72821 ] 00:06:54.486 [2024-12-14 04:56:05.322889] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.746 [2024-12-14 04:56:05.369079] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.746 [2024-12-14 04:56:05.411334] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:54.746 [2024-12-14 04:56:05.411380] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:55.314 04:56:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:55.314 04:56:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:06:55.314 04:56:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:06:55.314 04:56:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:06:55.314 04:56:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:55.314 04:56:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.314 BaseBdev1_malloc 00:06:55.314 04:56:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:55.314 04:56:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 
00:06:55.314 04:56:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:55.314 04:56:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.314 true 00:06:55.314 04:56:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:55.314 04:56:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:06:55.314 04:56:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:55.314 04:56:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.314 [2024-12-14 04:56:06.009628] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:06:55.314 [2024-12-14 04:56:06.009705] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:55.314 [2024-12-14 04:56:06.009725] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:06:55.314 [2024-12-14 04:56:06.009734] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:55.314 [2024-12-14 04:56:06.011746] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:55.314 [2024-12-14 04:56:06.011782] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:06:55.314 BaseBdev1 00:06:55.314 04:56:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:55.314 04:56:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:06:55.314 04:56:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:06:55.314 04:56:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:55.314 04:56:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 
00:06:55.314 BaseBdev2_malloc 00:06:55.314 04:56:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:55.314 04:56:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:06:55.314 04:56:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:55.315 04:56:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.315 true 00:06:55.315 04:56:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:55.315 04:56:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:06:55.315 04:56:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:55.315 04:56:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.315 [2024-12-14 04:56:06.059051] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:06:55.315 [2024-12-14 04:56:06.059115] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:55.315 [2024-12-14 04:56:06.059147] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:06:55.315 [2024-12-14 04:56:06.059156] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:55.315 [2024-12-14 04:56:06.061104] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:55.315 [2024-12-14 04:56:06.061136] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:06:55.315 BaseBdev2 00:06:55.315 04:56:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:55.315 04:56:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:06:55.315 04:56:06 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:55.315 04:56:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.315 [2024-12-14 04:56:06.071074] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:06:55.315 [2024-12-14 04:56:06.072858] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:06:55.315 [2024-12-14 04:56:06.073019] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:06:55.315 [2024-12-14 04:56:06.073032] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:06:55.315 [2024-12-14 04:56:06.073284] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:06:55.315 [2024-12-14 04:56:06.073417] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:06:55.315 [2024-12-14 04:56:06.073443] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:06:55.315 [2024-12-14 04:56:06.073570] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:55.315 04:56:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:55.315 04:56:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:06:55.315 04:56:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:06:55.315 04:56:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:06:55.315 04:56:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:55.315 04:56:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:55.315 04:56:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:06:55.315 04:56:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:55.315 04:56:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:55.315 04:56:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:55.315 04:56:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:55.315 04:56:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:55.315 04:56:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:55.315 04:56:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.315 04:56:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:06:55.315 04:56:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:55.315 04:56:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:55.315 "name": "raid_bdev1", 00:06:55.315 "uuid": "8eae8fbf-aa17-45eb-8561-3ef251f7a9d7", 00:06:55.315 "strip_size_kb": 64, 00:06:55.315 "state": "online", 00:06:55.315 "raid_level": "raid0", 00:06:55.315 "superblock": true, 00:06:55.315 "num_base_bdevs": 2, 00:06:55.315 "num_base_bdevs_discovered": 2, 00:06:55.315 "num_base_bdevs_operational": 2, 00:06:55.315 "base_bdevs_list": [ 00:06:55.315 { 00:06:55.315 "name": "BaseBdev1", 00:06:55.315 "uuid": "d9eab4fd-6715-50d4-a731-7e84e5482d09", 00:06:55.315 "is_configured": true, 00:06:55.315 "data_offset": 2048, 00:06:55.315 "data_size": 63488 00:06:55.315 }, 00:06:55.315 { 00:06:55.315 "name": "BaseBdev2", 00:06:55.315 "uuid": "d1ce0a47-4cc5-54eb-ad49-ead0fe239f4f", 00:06:55.315 "is_configured": true, 00:06:55.315 "data_offset": 2048, 00:06:55.315 "data_size": 63488 00:06:55.315 } 00:06:55.315 ] 00:06:55.315 }' 00:06:55.315 04:56:06 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:55.315 04:56:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.883 04:56:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:06:55.883 04:56:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:06:55.883 [2024-12-14 04:56:06.582580] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:06:56.822 04:56:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:06:56.822 04:56:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:56.822 04:56:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:56.822 04:56:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:56.822 04:56:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:06:56.822 04:56:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:06:56.822 04:56:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:06:56.822 04:56:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:06:56.822 04:56:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:06:56.822 04:56:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:06:56.822 04:56:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:56.822 04:56:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:56.822 04:56:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:06:56.822 04:56:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:56.822 04:56:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:56.822 04:56:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:56.822 04:56:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:56.822 04:56:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:56.822 04:56:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:06:56.822 04:56:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:56.822 04:56:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:56.822 04:56:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:56.822 04:56:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:56.822 "name": "raid_bdev1", 00:06:56.822 "uuid": "8eae8fbf-aa17-45eb-8561-3ef251f7a9d7", 00:06:56.822 "strip_size_kb": 64, 00:06:56.822 "state": "online", 00:06:56.822 "raid_level": "raid0", 00:06:56.822 "superblock": true, 00:06:56.822 "num_base_bdevs": 2, 00:06:56.822 "num_base_bdevs_discovered": 2, 00:06:56.822 "num_base_bdevs_operational": 2, 00:06:56.822 "base_bdevs_list": [ 00:06:56.822 { 00:06:56.822 "name": "BaseBdev1", 00:06:56.822 "uuid": "d9eab4fd-6715-50d4-a731-7e84e5482d09", 00:06:56.822 "is_configured": true, 00:06:56.822 "data_offset": 2048, 00:06:56.822 "data_size": 63488 00:06:56.822 }, 00:06:56.822 { 00:06:56.822 "name": "BaseBdev2", 00:06:56.822 "uuid": "d1ce0a47-4cc5-54eb-ad49-ead0fe239f4f", 00:06:56.822 "is_configured": true, 00:06:56.822 "data_offset": 2048, 00:06:56.822 "data_size": 63488 00:06:56.822 } 00:06:56.822 ] 00:06:56.822 }' 00:06:56.822 04:56:07 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:56.822 04:56:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:57.081 04:56:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:06:57.082 04:56:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:57.082 04:56:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:57.082 [2024-12-14 04:56:07.942244] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:06:57.082 [2024-12-14 04:56:07.942276] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:57.082 [2024-12-14 04:56:07.944776] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:57.082 [2024-12-14 04:56:07.944818] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:57.082 [2024-12-14 04:56:07.944853] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:57.082 [2024-12-14 04:56:07.944862] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:06:57.082 { 00:06:57.082 "results": [ 00:06:57.082 { 00:06:57.082 "job": "raid_bdev1", 00:06:57.082 "core_mask": "0x1", 00:06:57.082 "workload": "randrw", 00:06:57.082 "percentage": 50, 00:06:57.082 "status": "finished", 00:06:57.082 "queue_depth": 1, 00:06:57.082 "io_size": 131072, 00:06:57.082 "runtime": 1.360471, 00:06:57.082 "iops": 17942.315565712168, 00:06:57.082 "mibps": 2242.789445714021, 00:06:57.082 "io_failed": 1, 00:06:57.082 "io_timeout": 0, 00:06:57.082 "avg_latency_us": 77.13059017169402, 00:06:57.082 "min_latency_us": 24.258515283842794, 00:06:57.082 "max_latency_us": 1380.8349344978167 00:06:57.082 } 00:06:57.082 ], 00:06:57.082 "core_count": 1 00:06:57.082 } 00:06:57.082 04:56:07 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:57.082 04:56:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 72821 00:06:57.082 04:56:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 72821 ']' 00:06:57.082 04:56:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 72821 00:06:57.082 04:56:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:06:57.082 04:56:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:57.082 04:56:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72821 00:06:57.342 04:56:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:57.342 04:56:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:57.342 killing process with pid 72821 00:06:57.342 04:56:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72821' 00:06:57.342 04:56:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 72821 00:06:57.342 [2024-12-14 04:56:07.977501] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:57.342 04:56:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 72821 00:06:57.342 [2024-12-14 04:56:07.992994] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:57.602 04:56:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.nv2ti3N3v9 00:06:57.602 04:56:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:06:57.602 04:56:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:06:57.602 04:56:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:06:57.602 04:56:08 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:06:57.602 04:56:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:06:57.602 04:56:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:06:57.602 04:56:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:06:57.602 00:06:57.602 real 0m3.171s 00:06:57.602 user 0m4.007s 00:06:57.602 sys 0m0.474s 00:06:57.602 04:56:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:57.602 04:56:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:57.602 ************************************ 00:06:57.602 END TEST raid_read_error_test 00:06:57.602 ************************************ 00:06:57.602 04:56:08 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 2 write 00:06:57.602 04:56:08 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:06:57.602 04:56:08 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:57.602 04:56:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:57.602 ************************************ 00:06:57.602 START TEST raid_write_error_test 00:06:57.602 ************************************ 00:06:57.602 04:56:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 2 write 00:06:57.602 04:56:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:06:57.602 04:56:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:06:57.602 04:56:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:06:57.602 04:56:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:06:57.602 04:56:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:06:57.602 04:56:08 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:06:57.602 04:56:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:06:57.602 04:56:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:06:57.602 04:56:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:06:57.602 04:56:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:06:57.602 04:56:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:06:57.602 04:56:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:06:57.602 04:56:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:06:57.602 04:56:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:06:57.602 04:56:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:06:57.602 04:56:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:06:57.602 04:56:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:06:57.602 04:56:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:06:57.602 04:56:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:06:57.602 04:56:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:06:57.602 04:56:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:06:57.602 04:56:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:06:57.602 04:56:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.0O4vBxlrBR 00:06:57.602 04:56:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=72950 00:06:57.602 04:56:08 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:06:57.602 04:56:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 72950 00:06:57.602 04:56:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 72950 ']' 00:06:57.602 04:56:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:57.602 04:56:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:57.602 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:57.602 04:56:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:57.602 04:56:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:57.602 04:56:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:57.602 [2024-12-14 04:56:08.409343] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:06:57.602 [2024-12-14 04:56:08.409478] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72950 ] 00:06:57.861 [2024-12-14 04:56:08.567907] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.861 [2024-12-14 04:56:08.612142] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.861 [2024-12-14 04:56:08.653755] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:57.861 [2024-12-14 04:56:08.653788] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:58.430 04:56:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:58.430 04:56:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:06:58.430 04:56:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:06:58.430 04:56:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:06:58.430 04:56:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:58.430 04:56:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:58.430 BaseBdev1_malloc 00:06:58.430 04:56:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:58.430 04:56:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:06:58.430 04:56:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:58.430 04:56:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:58.430 true 00:06:58.430 04:56:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:06:58.430 04:56:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:06:58.430 04:56:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:58.430 04:56:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:58.430 [2024-12-14 04:56:09.260050] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:06:58.430 [2024-12-14 04:56:09.260101] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:58.430 [2024-12-14 04:56:09.260119] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:06:58.430 [2024-12-14 04:56:09.260128] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:58.430 [2024-12-14 04:56:09.262151] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:58.430 [2024-12-14 04:56:09.262209] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:06:58.430 BaseBdev1 00:06:58.430 04:56:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:58.430 04:56:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:06:58.430 04:56:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:06:58.430 04:56:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:58.430 04:56:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:58.430 BaseBdev2_malloc 00:06:58.430 04:56:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:58.430 04:56:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:06:58.430 04:56:09 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:58.430 04:56:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:58.430 true 00:06:58.430 04:56:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:58.430 04:56:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:06:58.690 04:56:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:58.690 04:56:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:58.690 [2024-12-14 04:56:09.316784] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:06:58.690 [2024-12-14 04:56:09.316855] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:58.690 [2024-12-14 04:56:09.316883] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:06:58.690 [2024-12-14 04:56:09.316897] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:58.690 [2024-12-14 04:56:09.319912] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:58.690 [2024-12-14 04:56:09.319949] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:06:58.690 BaseBdev2 00:06:58.690 04:56:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:58.690 04:56:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:06:58.690 04:56:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:58.690 04:56:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:58.690 [2024-12-14 04:56:09.328756] bdev_raid.c:3322:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:06:58.690 [2024-12-14 04:56:09.330646] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:06:58.690 [2024-12-14 04:56:09.330827] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:06:58.690 [2024-12-14 04:56:09.330844] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:06:58.690 [2024-12-14 04:56:09.331099] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:06:58.690 [2024-12-14 04:56:09.331284] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:06:58.690 [2024-12-14 04:56:09.331302] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:06:58.690 [2024-12-14 04:56:09.331426] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:58.690 04:56:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:58.690 04:56:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:06:58.690 04:56:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:06:58.690 04:56:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:06:58.690 04:56:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:58.690 04:56:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:58.690 04:56:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:58.690 04:56:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:58.690 04:56:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:58.690 04:56:09 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:58.690 04:56:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:58.690 04:56:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:58.690 04:56:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:06:58.690 04:56:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:58.690 04:56:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:58.690 04:56:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:58.690 04:56:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:58.690 "name": "raid_bdev1", 00:06:58.690 "uuid": "92da7f68-f655-4f51-844f-1cf1e23965ea", 00:06:58.690 "strip_size_kb": 64, 00:06:58.690 "state": "online", 00:06:58.690 "raid_level": "raid0", 00:06:58.690 "superblock": true, 00:06:58.690 "num_base_bdevs": 2, 00:06:58.690 "num_base_bdevs_discovered": 2, 00:06:58.690 "num_base_bdevs_operational": 2, 00:06:58.690 "base_bdevs_list": [ 00:06:58.690 { 00:06:58.690 "name": "BaseBdev1", 00:06:58.690 "uuid": "0ea56cc8-6851-585f-bd35-909acbb299c6", 00:06:58.690 "is_configured": true, 00:06:58.690 "data_offset": 2048, 00:06:58.690 "data_size": 63488 00:06:58.690 }, 00:06:58.690 { 00:06:58.690 "name": "BaseBdev2", 00:06:58.690 "uuid": "b031bebc-9153-55cc-a454-af307f3b1bbc", 00:06:58.690 "is_configured": true, 00:06:58.690 "data_offset": 2048, 00:06:58.690 "data_size": 63488 00:06:58.690 } 00:06:58.690 ] 00:06:58.690 }' 00:06:58.690 04:56:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:58.690 04:56:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:58.950 04:56:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:06:58.950 04:56:09 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:06:59.209 [2024-12-14 04:56:09.836261] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:00.147 04:56:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:07:00.147 04:56:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:00.147 04:56:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:00.147 04:56:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:00.147 04:56:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:00.147 04:56:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:07:00.147 04:56:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:07:00.147 04:56:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:00.147 04:56:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:00.147 04:56:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:00.147 04:56:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:00.147 04:56:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:00.147 04:56:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:00.147 04:56:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:00.147 04:56:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:00.147 04:56:10 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:00.147 04:56:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:00.147 04:56:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:00.147 04:56:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:00.147 04:56:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:00.147 04:56:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:00.147 04:56:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:00.147 04:56:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:00.147 "name": "raid_bdev1", 00:07:00.147 "uuid": "92da7f68-f655-4f51-844f-1cf1e23965ea", 00:07:00.147 "strip_size_kb": 64, 00:07:00.147 "state": "online", 00:07:00.147 "raid_level": "raid0", 00:07:00.147 "superblock": true, 00:07:00.147 "num_base_bdevs": 2, 00:07:00.147 "num_base_bdevs_discovered": 2, 00:07:00.147 "num_base_bdevs_operational": 2, 00:07:00.147 "base_bdevs_list": [ 00:07:00.147 { 00:07:00.147 "name": "BaseBdev1", 00:07:00.147 "uuid": "0ea56cc8-6851-585f-bd35-909acbb299c6", 00:07:00.147 "is_configured": true, 00:07:00.147 "data_offset": 2048, 00:07:00.147 "data_size": 63488 00:07:00.147 }, 00:07:00.147 { 00:07:00.147 "name": "BaseBdev2", 00:07:00.147 "uuid": "b031bebc-9153-55cc-a454-af307f3b1bbc", 00:07:00.147 "is_configured": true, 00:07:00.147 "data_offset": 2048, 00:07:00.147 "data_size": 63488 00:07:00.147 } 00:07:00.147 ] 00:07:00.147 }' 00:07:00.147 04:56:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:00.147 04:56:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:00.406 04:56:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- 
# rpc_cmd bdev_raid_delete raid_bdev1 00:07:00.406 04:56:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:00.406 04:56:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:00.406 [2024-12-14 04:56:11.207856] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:00.406 [2024-12-14 04:56:11.207891] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:00.406 [2024-12-14 04:56:11.210457] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:00.406 [2024-12-14 04:56:11.210508] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:00.406 [2024-12-14 04:56:11.210543] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:00.406 [2024-12-14 04:56:11.210557] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:07:00.406 { 00:07:00.406 "results": [ 00:07:00.406 { 00:07:00.406 "job": "raid_bdev1", 00:07:00.406 "core_mask": "0x1", 00:07:00.406 "workload": "randrw", 00:07:00.406 "percentage": 50, 00:07:00.406 "status": "finished", 00:07:00.406 "queue_depth": 1, 00:07:00.406 "io_size": 131072, 00:07:00.406 "runtime": 1.3725, 00:07:00.406 "iops": 17772.677595628415, 00:07:00.406 "mibps": 2221.584699453552, 00:07:00.406 "io_failed": 1, 00:07:00.406 "io_timeout": 0, 00:07:00.406 "avg_latency_us": 77.77545756294143, 00:07:00.406 "min_latency_us": 24.482096069868994, 00:07:00.406 "max_latency_us": 1438.071615720524 00:07:00.406 } 00:07:00.406 ], 00:07:00.406 "core_count": 1 00:07:00.406 } 00:07:00.406 04:56:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:00.406 04:56:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 72950 00:07:00.406 04:56:11 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@950 -- # '[' -z 72950 ']' 00:07:00.406 04:56:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 72950 00:07:00.406 04:56:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:07:00.406 04:56:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:00.406 04:56:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72950 00:07:00.406 04:56:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:00.406 04:56:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:00.406 killing process with pid 72950 00:07:00.406 04:56:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72950' 00:07:00.406 04:56:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 72950 00:07:00.406 [2024-12-14 04:56:11.256636] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:00.406 04:56:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 72950 00:07:00.406 [2024-12-14 04:56:11.271404] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:00.665 04:56:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.0O4vBxlrBR 00:07:00.665 04:56:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:00.665 04:56:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:00.665 04:56:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:07:00.665 04:56:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:07:00.665 04:56:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:00.665 04:56:11 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@200 -- # return 1 00:07:00.665 04:56:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:07:00.665 00:07:00.665 real 0m3.209s 00:07:00.665 user 0m4.041s 00:07:00.665 sys 0m0.514s 00:07:00.665 04:56:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:00.665 04:56:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:00.665 ************************************ 00:07:00.665 END TEST raid_write_error_test 00:07:00.665 ************************************ 00:07:00.923 04:56:11 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:07:00.923 04:56:11 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:07:00.923 04:56:11 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:00.923 04:56:11 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:00.923 04:56:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:00.923 ************************************ 00:07:00.923 START TEST raid_state_function_test 00:07:00.923 ************************************ 00:07:00.923 04:56:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 2 false 00:07:00.923 04:56:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:07:00.923 04:56:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:00.923 04:56:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:07:00.923 04:56:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:00.923 04:56:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:00.923 04:56:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 
00:07:00.923 04:56:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:00.923 04:56:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:00.923 04:56:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:00.923 04:56:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:00.923 04:56:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:00.923 04:56:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:00.923 04:56:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:00.923 04:56:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:00.923 04:56:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:00.923 04:56:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:00.923 04:56:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:00.923 04:56:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:00.923 04:56:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:07:00.923 04:56:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:00.923 04:56:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:00.923 04:56:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:07:00.923 04:56:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:07:00.923 04:56:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=73077 00:07:00.923 04:56:11 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:00.923 04:56:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73077' 00:07:00.923 Process raid pid: 73077 00:07:00.923 04:56:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 73077 00:07:00.923 04:56:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 73077 ']' 00:07:00.923 04:56:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:00.923 04:56:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:00.923 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:00.923 04:56:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:00.923 04:56:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:00.923 04:56:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:00.923 [2024-12-14 04:56:11.683816] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:07:00.923 [2024-12-14 04:56:11.683962] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:01.182 [2024-12-14 04:56:11.844267] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.182 [2024-12-14 04:56:11.892470] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.182 [2024-12-14 04:56:11.934081] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:01.182 [2024-12-14 04:56:11.934124] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:01.751 04:56:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:01.751 04:56:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:07:01.751 04:56:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:01.751 04:56:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:01.751 04:56:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:01.751 [2024-12-14 04:56:12.499212] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:01.751 [2024-12-14 04:56:12.499268] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:01.751 [2024-12-14 04:56:12.499281] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:01.751 [2024-12-14 04:56:12.499290] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:01.751 04:56:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:01.751 04:56:12 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:01.751 04:56:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:01.751 04:56:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:01.751 04:56:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:01.751 04:56:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:01.751 04:56:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:01.751 04:56:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:01.751 04:56:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:01.751 04:56:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:01.751 04:56:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:01.751 04:56:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:01.751 04:56:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:01.751 04:56:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:01.751 04:56:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:01.751 04:56:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:01.751 04:56:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:01.751 "name": "Existed_Raid", 00:07:01.751 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:01.751 "strip_size_kb": 64, 00:07:01.751 "state": "configuring", 00:07:01.751 
"raid_level": "concat", 00:07:01.751 "superblock": false, 00:07:01.751 "num_base_bdevs": 2, 00:07:01.751 "num_base_bdevs_discovered": 0, 00:07:01.751 "num_base_bdevs_operational": 2, 00:07:01.751 "base_bdevs_list": [ 00:07:01.751 { 00:07:01.751 "name": "BaseBdev1", 00:07:01.751 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:01.751 "is_configured": false, 00:07:01.751 "data_offset": 0, 00:07:01.751 "data_size": 0 00:07:01.751 }, 00:07:01.751 { 00:07:01.751 "name": "BaseBdev2", 00:07:01.751 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:01.751 "is_configured": false, 00:07:01.751 "data_offset": 0, 00:07:01.751 "data_size": 0 00:07:01.751 } 00:07:01.751 ] 00:07:01.751 }' 00:07:01.751 04:56:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:01.751 04:56:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:02.320 04:56:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:02.320 04:56:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:02.320 04:56:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:02.320 [2024-12-14 04:56:12.950354] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:02.320 [2024-12-14 04:56:12.950401] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:07:02.320 04:56:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:02.320 04:56:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:02.320 04:56:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:02.320 04:56:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:07:02.320 [2024-12-14 04:56:12.962351] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:02.320 [2024-12-14 04:56:12.962407] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:02.320 [2024-12-14 04:56:12.962416] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:02.320 [2024-12-14 04:56:12.962424] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:02.320 04:56:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:02.320 04:56:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:02.320 04:56:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:02.320 04:56:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:02.320 [2024-12-14 04:56:12.983311] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:02.320 BaseBdev1 00:07:02.320 04:56:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:02.320 04:56:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:02.320 04:56:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:07:02.320 04:56:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:02.320 04:56:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:07:02.320 04:56:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:02.320 04:56:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:02.320 04:56:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # 
rpc_cmd bdev_wait_for_examine 00:07:02.320 04:56:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:02.320 04:56:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:02.320 04:56:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:02.320 04:56:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:02.320 04:56:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:02.320 04:56:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:02.320 [ 00:07:02.320 { 00:07:02.320 "name": "BaseBdev1", 00:07:02.320 "aliases": [ 00:07:02.320 "57c74419-3603-4876-9032-b0e225e4b14e" 00:07:02.320 ], 00:07:02.320 "product_name": "Malloc disk", 00:07:02.320 "block_size": 512, 00:07:02.320 "num_blocks": 65536, 00:07:02.320 "uuid": "57c74419-3603-4876-9032-b0e225e4b14e", 00:07:02.320 "assigned_rate_limits": { 00:07:02.320 "rw_ios_per_sec": 0, 00:07:02.320 "rw_mbytes_per_sec": 0, 00:07:02.320 "r_mbytes_per_sec": 0, 00:07:02.320 "w_mbytes_per_sec": 0 00:07:02.320 }, 00:07:02.320 "claimed": true, 00:07:02.320 "claim_type": "exclusive_write", 00:07:02.320 "zoned": false, 00:07:02.320 "supported_io_types": { 00:07:02.320 "read": true, 00:07:02.320 "write": true, 00:07:02.320 "unmap": true, 00:07:02.320 "flush": true, 00:07:02.320 "reset": true, 00:07:02.320 "nvme_admin": false, 00:07:02.320 "nvme_io": false, 00:07:02.320 "nvme_io_md": false, 00:07:02.320 "write_zeroes": true, 00:07:02.320 "zcopy": true, 00:07:02.320 "get_zone_info": false, 00:07:02.320 "zone_management": false, 00:07:02.320 "zone_append": false, 00:07:02.320 "compare": false, 00:07:02.320 "compare_and_write": false, 00:07:02.320 "abort": true, 00:07:02.320 "seek_hole": false, 00:07:02.320 "seek_data": false, 00:07:02.320 "copy": true, 00:07:02.320 "nvme_iov_md": 
false 00:07:02.320 }, 00:07:02.320 "memory_domains": [ 00:07:02.320 { 00:07:02.320 "dma_device_id": "system", 00:07:02.320 "dma_device_type": 1 00:07:02.320 }, 00:07:02.320 { 00:07:02.320 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:02.320 "dma_device_type": 2 00:07:02.320 } 00:07:02.320 ], 00:07:02.320 "driver_specific": {} 00:07:02.320 } 00:07:02.320 ] 00:07:02.320 04:56:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:02.320 04:56:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:07:02.320 04:56:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:02.320 04:56:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:02.320 04:56:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:02.320 04:56:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:02.320 04:56:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:02.320 04:56:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:02.320 04:56:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:02.320 04:56:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:02.320 04:56:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:02.320 04:56:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:02.320 04:56:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:02.320 04:56:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:02.320 
04:56:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:02.320 04:56:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:02.320 04:56:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:02.320 04:56:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:02.320 "name": "Existed_Raid", 00:07:02.320 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:02.320 "strip_size_kb": 64, 00:07:02.320 "state": "configuring", 00:07:02.320 "raid_level": "concat", 00:07:02.320 "superblock": false, 00:07:02.320 "num_base_bdevs": 2, 00:07:02.320 "num_base_bdevs_discovered": 1, 00:07:02.320 "num_base_bdevs_operational": 2, 00:07:02.320 "base_bdevs_list": [ 00:07:02.320 { 00:07:02.320 "name": "BaseBdev1", 00:07:02.320 "uuid": "57c74419-3603-4876-9032-b0e225e4b14e", 00:07:02.320 "is_configured": true, 00:07:02.321 "data_offset": 0, 00:07:02.321 "data_size": 65536 00:07:02.321 }, 00:07:02.321 { 00:07:02.321 "name": "BaseBdev2", 00:07:02.321 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:02.321 "is_configured": false, 00:07:02.321 "data_offset": 0, 00:07:02.321 "data_size": 0 00:07:02.321 } 00:07:02.321 ] 00:07:02.321 }' 00:07:02.321 04:56:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:02.321 04:56:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:02.580 04:56:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:02.580 04:56:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:02.580 04:56:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:02.839 [2024-12-14 04:56:13.462604] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:02.839 [2024-12-14 04:56:13.462661] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:07:02.839 04:56:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:02.839 04:56:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:02.839 04:56:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:02.839 04:56:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:02.839 [2024-12-14 04:56:13.474629] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:02.839 [2024-12-14 04:56:13.476428] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:02.839 [2024-12-14 04:56:13.476467] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:02.839 04:56:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:02.839 04:56:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:02.839 04:56:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:02.839 04:56:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:02.839 04:56:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:02.839 04:56:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:02.839 04:56:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:02.839 04:56:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:02.839 04:56:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=2 00:07:02.839 04:56:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:02.839 04:56:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:02.839 04:56:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:02.839 04:56:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:02.840 04:56:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:02.840 04:56:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:02.840 04:56:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:02.840 04:56:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:02.840 04:56:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:02.840 04:56:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:02.840 "name": "Existed_Raid", 00:07:02.840 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:02.840 "strip_size_kb": 64, 00:07:02.840 "state": "configuring", 00:07:02.840 "raid_level": "concat", 00:07:02.840 "superblock": false, 00:07:02.840 "num_base_bdevs": 2, 00:07:02.840 "num_base_bdevs_discovered": 1, 00:07:02.840 "num_base_bdevs_operational": 2, 00:07:02.840 "base_bdevs_list": [ 00:07:02.840 { 00:07:02.840 "name": "BaseBdev1", 00:07:02.840 "uuid": "57c74419-3603-4876-9032-b0e225e4b14e", 00:07:02.840 "is_configured": true, 00:07:02.840 "data_offset": 0, 00:07:02.840 "data_size": 65536 00:07:02.840 }, 00:07:02.840 { 00:07:02.840 "name": "BaseBdev2", 00:07:02.840 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:02.840 "is_configured": false, 00:07:02.840 "data_offset": 0, 00:07:02.840 "data_size": 0 00:07:02.840 } 
00:07:02.840 ] 00:07:02.840 }' 00:07:02.840 04:56:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:02.840 04:56:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.099 04:56:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:03.099 04:56:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:03.099 04:56:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.099 [2024-12-14 04:56:13.926007] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:03.099 [2024-12-14 04:56:13.926134] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:07:03.099 [2024-12-14 04:56:13.926201] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:03.099 [2024-12-14 04:56:13.927263] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:03.099 [2024-12-14 04:56:13.927705] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:07:03.099 [2024-12-14 04:56:13.927783] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:07:03.099 BaseBdev2 00:07:03.099 [2024-12-14 04:56:13.928411] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:03.099 04:56:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:03.099 04:56:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:03.099 04:56:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:07:03.099 04:56:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:03.099 04:56:13 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:07:03.099 04:56:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:03.099 04:56:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:03.099 04:56:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:03.099 04:56:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:03.099 04:56:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.099 04:56:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:03.099 04:56:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:03.099 04:56:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:03.099 04:56:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.099 [ 00:07:03.099 { 00:07:03.099 "name": "BaseBdev2", 00:07:03.099 "aliases": [ 00:07:03.099 "cc63257e-8738-408e-b528-6840851c7d2d" 00:07:03.099 ], 00:07:03.099 "product_name": "Malloc disk", 00:07:03.099 "block_size": 512, 00:07:03.099 "num_blocks": 65536, 00:07:03.099 "uuid": "cc63257e-8738-408e-b528-6840851c7d2d", 00:07:03.099 "assigned_rate_limits": { 00:07:03.099 "rw_ios_per_sec": 0, 00:07:03.100 "rw_mbytes_per_sec": 0, 00:07:03.100 "r_mbytes_per_sec": 0, 00:07:03.100 "w_mbytes_per_sec": 0 00:07:03.100 }, 00:07:03.100 "claimed": true, 00:07:03.100 "claim_type": "exclusive_write", 00:07:03.100 "zoned": false, 00:07:03.100 "supported_io_types": { 00:07:03.100 "read": true, 00:07:03.100 "write": true, 00:07:03.100 "unmap": true, 00:07:03.100 "flush": true, 00:07:03.100 "reset": true, 00:07:03.100 "nvme_admin": false, 00:07:03.100 "nvme_io": false, 00:07:03.100 "nvme_io_md": 
false, 00:07:03.100 "write_zeroes": true, 00:07:03.100 "zcopy": true, 00:07:03.100 "get_zone_info": false, 00:07:03.100 "zone_management": false, 00:07:03.100 "zone_append": false, 00:07:03.100 "compare": false, 00:07:03.100 "compare_and_write": false, 00:07:03.100 "abort": true, 00:07:03.100 "seek_hole": false, 00:07:03.100 "seek_data": false, 00:07:03.100 "copy": true, 00:07:03.100 "nvme_iov_md": false 00:07:03.100 }, 00:07:03.100 "memory_domains": [ 00:07:03.100 { 00:07:03.100 "dma_device_id": "system", 00:07:03.100 "dma_device_type": 1 00:07:03.100 }, 00:07:03.100 { 00:07:03.100 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:03.100 "dma_device_type": 2 00:07:03.100 } 00:07:03.100 ], 00:07:03.100 "driver_specific": {} 00:07:03.100 } 00:07:03.100 ] 00:07:03.100 04:56:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:03.100 04:56:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:07:03.100 04:56:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:03.100 04:56:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:03.100 04:56:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:07:03.100 04:56:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:03.100 04:56:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:03.100 04:56:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:03.100 04:56:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:03.100 04:56:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:03.100 04:56:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:07:03.100 04:56:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:03.100 04:56:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:03.100 04:56:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:03.100 04:56:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:03.100 04:56:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:03.100 04:56:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:03.100 04:56:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.359 04:56:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:03.359 04:56:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:03.359 "name": "Existed_Raid", 00:07:03.359 "uuid": "3b4be453-98a6-40eb-85e7-ddaa32b66038", 00:07:03.359 "strip_size_kb": 64, 00:07:03.359 "state": "online", 00:07:03.359 "raid_level": "concat", 00:07:03.359 "superblock": false, 00:07:03.359 "num_base_bdevs": 2, 00:07:03.359 "num_base_bdevs_discovered": 2, 00:07:03.359 "num_base_bdevs_operational": 2, 00:07:03.359 "base_bdevs_list": [ 00:07:03.359 { 00:07:03.359 "name": "BaseBdev1", 00:07:03.359 "uuid": "57c74419-3603-4876-9032-b0e225e4b14e", 00:07:03.359 "is_configured": true, 00:07:03.359 "data_offset": 0, 00:07:03.359 "data_size": 65536 00:07:03.359 }, 00:07:03.359 { 00:07:03.359 "name": "BaseBdev2", 00:07:03.359 "uuid": "cc63257e-8738-408e-b528-6840851c7d2d", 00:07:03.359 "is_configured": true, 00:07:03.359 "data_offset": 0, 00:07:03.359 "data_size": 65536 00:07:03.359 } 00:07:03.359 ] 00:07:03.359 }' 00:07:03.359 04:56:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:07:03.359 04:56:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.619 04:56:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:03.619 04:56:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:03.619 04:56:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:03.619 04:56:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:03.619 04:56:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:03.619 04:56:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:03.619 04:56:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:03.619 04:56:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:03.619 04:56:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:03.619 04:56:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.619 [2024-12-14 04:56:14.369457] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:03.619 04:56:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:03.619 04:56:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:03.619 "name": "Existed_Raid", 00:07:03.619 "aliases": [ 00:07:03.619 "3b4be453-98a6-40eb-85e7-ddaa32b66038" 00:07:03.619 ], 00:07:03.619 "product_name": "Raid Volume", 00:07:03.619 "block_size": 512, 00:07:03.619 "num_blocks": 131072, 00:07:03.619 "uuid": "3b4be453-98a6-40eb-85e7-ddaa32b66038", 00:07:03.619 "assigned_rate_limits": { 00:07:03.619 "rw_ios_per_sec": 0, 00:07:03.619 "rw_mbytes_per_sec": 0, 00:07:03.619 "r_mbytes_per_sec": 
0, 00:07:03.619 "w_mbytes_per_sec": 0 00:07:03.619 }, 00:07:03.619 "claimed": false, 00:07:03.619 "zoned": false, 00:07:03.619 "supported_io_types": { 00:07:03.619 "read": true, 00:07:03.619 "write": true, 00:07:03.619 "unmap": true, 00:07:03.619 "flush": true, 00:07:03.619 "reset": true, 00:07:03.619 "nvme_admin": false, 00:07:03.619 "nvme_io": false, 00:07:03.619 "nvme_io_md": false, 00:07:03.619 "write_zeroes": true, 00:07:03.619 "zcopy": false, 00:07:03.619 "get_zone_info": false, 00:07:03.619 "zone_management": false, 00:07:03.619 "zone_append": false, 00:07:03.619 "compare": false, 00:07:03.619 "compare_and_write": false, 00:07:03.619 "abort": false, 00:07:03.619 "seek_hole": false, 00:07:03.619 "seek_data": false, 00:07:03.619 "copy": false, 00:07:03.619 "nvme_iov_md": false 00:07:03.619 }, 00:07:03.619 "memory_domains": [ 00:07:03.619 { 00:07:03.619 "dma_device_id": "system", 00:07:03.619 "dma_device_type": 1 00:07:03.619 }, 00:07:03.619 { 00:07:03.619 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:03.619 "dma_device_type": 2 00:07:03.619 }, 00:07:03.619 { 00:07:03.619 "dma_device_id": "system", 00:07:03.619 "dma_device_type": 1 00:07:03.619 }, 00:07:03.619 { 00:07:03.619 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:03.619 "dma_device_type": 2 00:07:03.619 } 00:07:03.619 ], 00:07:03.619 "driver_specific": { 00:07:03.619 "raid": { 00:07:03.619 "uuid": "3b4be453-98a6-40eb-85e7-ddaa32b66038", 00:07:03.619 "strip_size_kb": 64, 00:07:03.619 "state": "online", 00:07:03.619 "raid_level": "concat", 00:07:03.619 "superblock": false, 00:07:03.619 "num_base_bdevs": 2, 00:07:03.619 "num_base_bdevs_discovered": 2, 00:07:03.619 "num_base_bdevs_operational": 2, 00:07:03.619 "base_bdevs_list": [ 00:07:03.619 { 00:07:03.619 "name": "BaseBdev1", 00:07:03.619 "uuid": "57c74419-3603-4876-9032-b0e225e4b14e", 00:07:03.619 "is_configured": true, 00:07:03.619 "data_offset": 0, 00:07:03.619 "data_size": 65536 00:07:03.619 }, 00:07:03.619 { 00:07:03.619 "name": "BaseBdev2", 
00:07:03.619 "uuid": "cc63257e-8738-408e-b528-6840851c7d2d", 00:07:03.619 "is_configured": true, 00:07:03.619 "data_offset": 0, 00:07:03.619 "data_size": 65536 00:07:03.619 } 00:07:03.619 ] 00:07:03.619 } 00:07:03.619 } 00:07:03.619 }' 00:07:03.619 04:56:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:03.619 04:56:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:03.619 BaseBdev2' 00:07:03.619 04:56:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:03.883 04:56:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:03.883 04:56:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:03.883 04:56:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:03.883 04:56:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:03.883 04:56:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.883 04:56:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:03.883 04:56:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:03.883 04:56:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:03.883 04:56:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:03.883 04:56:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:03.883 04:56:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:07:03.883 04:56:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:03.884 04:56:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.884 04:56:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:03.884 04:56:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:03.884 04:56:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:03.884 04:56:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:03.884 04:56:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:03.884 04:56:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:03.884 04:56:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.884 [2024-12-14 04:56:14.616803] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:03.884 [2024-12-14 04:56:14.616833] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:03.884 [2024-12-14 04:56:14.616886] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:03.884 04:56:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:03.884 04:56:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:03.884 04:56:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:07:03.884 04:56:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:03.884 04:56:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:03.884 04:56:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # 
expected_state=offline 00:07:03.884 04:56:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:07:03.884 04:56:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:03.884 04:56:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:03.884 04:56:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:03.884 04:56:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:03.884 04:56:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:03.884 04:56:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:03.884 04:56:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:03.884 04:56:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:03.884 04:56:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:03.884 04:56:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:03.884 04:56:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:03.884 04:56:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:03.884 04:56:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.884 04:56:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:03.884 04:56:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:03.884 "name": "Existed_Raid", 00:07:03.884 "uuid": "3b4be453-98a6-40eb-85e7-ddaa32b66038", 00:07:03.884 "strip_size_kb": 64, 00:07:03.884 
"state": "offline", 00:07:03.884 "raid_level": "concat", 00:07:03.884 "superblock": false, 00:07:03.884 "num_base_bdevs": 2, 00:07:03.884 "num_base_bdevs_discovered": 1, 00:07:03.884 "num_base_bdevs_operational": 1, 00:07:03.884 "base_bdevs_list": [ 00:07:03.884 { 00:07:03.884 "name": null, 00:07:03.884 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:03.884 "is_configured": false, 00:07:03.884 "data_offset": 0, 00:07:03.884 "data_size": 65536 00:07:03.884 }, 00:07:03.884 { 00:07:03.884 "name": "BaseBdev2", 00:07:03.884 "uuid": "cc63257e-8738-408e-b528-6840851c7d2d", 00:07:03.884 "is_configured": true, 00:07:03.884 "data_offset": 0, 00:07:03.884 "data_size": 65536 00:07:03.884 } 00:07:03.884 ] 00:07:03.884 }' 00:07:03.884 04:56:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:03.884 04:56:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:04.474 04:56:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:04.474 04:56:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:04.474 04:56:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:04.474 04:56:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:04.474 04:56:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:04.474 04:56:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:04.474 04:56:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:04.474 04:56:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:04.474 04:56:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:04.474 04:56:15 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:04.474 04:56:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:04.474 04:56:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:04.474 [2024-12-14 04:56:15.131247] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:04.474 [2024-12-14 04:56:15.131306] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:07:04.474 04:56:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:04.474 04:56:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:04.474 04:56:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:04.474 04:56:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:04.474 04:56:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:04.474 04:56:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:04.474 04:56:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:04.474 04:56:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:04.474 04:56:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:04.474 04:56:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:04.474 04:56:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:04.474 04:56:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 73077 00:07:04.474 04:56:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 73077 ']' 00:07:04.474 04:56:15 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@954 -- # kill -0 73077 00:07:04.474 04:56:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:07:04.474 04:56:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:04.474 04:56:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73077 00:07:04.474 04:56:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:04.474 04:56:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:04.474 killing process with pid 73077 00:07:04.474 04:56:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73077' 00:07:04.474 04:56:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 73077 00:07:04.474 [2024-12-14 04:56:15.233857] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:04.474 04:56:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 73077 00:07:04.474 [2024-12-14 04:56:15.234802] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:04.733 04:56:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:07:04.733 00:07:04.733 real 0m3.883s 00:07:04.733 user 0m6.112s 00:07:04.733 sys 0m0.762s 00:07:04.733 04:56:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:04.733 04:56:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:04.733 ************************************ 00:07:04.733 END TEST raid_state_function_test 00:07:04.733 ************************************ 00:07:04.733 04:56:15 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:07:04.733 04:56:15 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 
']' 00:07:04.733 04:56:15 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:04.733 04:56:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:04.733 ************************************ 00:07:04.733 START TEST raid_state_function_test_sb 00:07:04.733 ************************************ 00:07:04.733 04:56:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 2 true 00:07:04.733 04:56:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:07:04.733 04:56:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:04.733 04:56:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:07:04.733 04:56:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:04.733 04:56:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:04.733 04:56:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:04.733 04:56:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:04.733 04:56:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:04.733 04:56:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:04.733 04:56:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:04.733 04:56:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:04.733 04:56:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:04.733 04:56:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:04.733 04:56:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 
00:07:04.733 04:56:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:04.733 04:56:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:04.733 04:56:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:04.733 04:56:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:04.733 04:56:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:07:04.733 04:56:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:04.733 04:56:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:04.733 04:56:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:07:04.733 04:56:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:07:04.733 04:56:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=73319 00:07:04.733 04:56:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:04.733 Process raid pid: 73319 00:07:04.733 04:56:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73319' 00:07:04.733 04:56:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 73319 00:07:04.733 04:56:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 73319 ']' 00:07:04.733 04:56:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:04.733 04:56:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:04.733 Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock... 00:07:04.733 04:56:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:04.734 04:56:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:04.734 04:56:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:04.993 [2024-12-14 04:56:15.638999] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:04.993 [2024-12-14 04:56:15.639154] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:04.993 [2024-12-14 04:56:15.799525] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.993 [2024-12-14 04:56:15.844504] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.251 [2024-12-14 04:56:15.886833] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:05.251 [2024-12-14 04:56:15.886872] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:05.819 04:56:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:05.819 04:56:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:07:05.819 04:56:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:05.819 04:56:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:05.819 04:56:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:05.819 [2024-12-14 04:56:16.465485] bdev.c:8272:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev1 00:07:05.819 [2024-12-14 04:56:16.465538] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:05.819 [2024-12-14 04:56:16.465557] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:05.819 [2024-12-14 04:56:16.465567] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:05.819 04:56:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:05.819 04:56:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:05.819 04:56:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:05.819 04:56:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:05.819 04:56:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:05.819 04:56:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:05.819 04:56:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:05.819 04:56:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:05.819 04:56:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:05.819 04:56:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:05.819 04:56:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:05.819 04:56:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:05.819 04:56:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:07:05.819 04:56:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:05.819 04:56:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:05.819 04:56:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:05.819 04:56:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:05.819 "name": "Existed_Raid", 00:07:05.819 "uuid": "151ccc97-254f-4448-93a9-c6d6769a1ef7", 00:07:05.819 "strip_size_kb": 64, 00:07:05.819 "state": "configuring", 00:07:05.819 "raid_level": "concat", 00:07:05.819 "superblock": true, 00:07:05.819 "num_base_bdevs": 2, 00:07:05.819 "num_base_bdevs_discovered": 0, 00:07:05.819 "num_base_bdevs_operational": 2, 00:07:05.819 "base_bdevs_list": [ 00:07:05.819 { 00:07:05.819 "name": "BaseBdev1", 00:07:05.819 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:05.819 "is_configured": false, 00:07:05.819 "data_offset": 0, 00:07:05.819 "data_size": 0 00:07:05.819 }, 00:07:05.819 { 00:07:05.819 "name": "BaseBdev2", 00:07:05.819 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:05.819 "is_configured": false, 00:07:05.819 "data_offset": 0, 00:07:05.819 "data_size": 0 00:07:05.819 } 00:07:05.819 ] 00:07:05.819 }' 00:07:05.819 04:56:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:05.819 04:56:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:06.079 04:56:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:06.079 04:56:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:06.079 04:56:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:06.079 [2024-12-14 04:56:16.836738] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 
00:07:06.079 [2024-12-14 04:56:16.836786] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:07:06.079 04:56:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:06.079 04:56:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:06.079 04:56:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:06.079 04:56:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:06.079 [2024-12-14 04:56:16.848751] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:06.079 [2024-12-14 04:56:16.848787] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:06.079 [2024-12-14 04:56:16.848795] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:06.079 [2024-12-14 04:56:16.848804] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:06.079 04:56:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:06.079 04:56:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:06.079 04:56:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:06.079 04:56:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:06.079 [2024-12-14 04:56:16.869362] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:06.079 BaseBdev1 00:07:06.079 04:56:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:06.079 04:56:16 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:06.079 04:56:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:07:06.079 04:56:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:06.079 04:56:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:07:06.079 04:56:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:06.079 04:56:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:06.079 04:56:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:06.079 04:56:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:06.079 04:56:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:06.079 04:56:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:06.079 04:56:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:06.079 04:56:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:06.079 04:56:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:06.079 [ 00:07:06.079 { 00:07:06.079 "name": "BaseBdev1", 00:07:06.079 "aliases": [ 00:07:06.079 "f79d5d0f-ed4d-4a24-9738-bd90c2be560d" 00:07:06.079 ], 00:07:06.079 "product_name": "Malloc disk", 00:07:06.079 "block_size": 512, 00:07:06.079 "num_blocks": 65536, 00:07:06.079 "uuid": "f79d5d0f-ed4d-4a24-9738-bd90c2be560d", 00:07:06.079 "assigned_rate_limits": { 00:07:06.079 "rw_ios_per_sec": 0, 00:07:06.079 "rw_mbytes_per_sec": 0, 00:07:06.079 "r_mbytes_per_sec": 0, 00:07:06.079 "w_mbytes_per_sec": 0 00:07:06.079 }, 00:07:06.079 "claimed": true, 
00:07:06.079 "claim_type": "exclusive_write", 00:07:06.079 "zoned": false, 00:07:06.079 "supported_io_types": { 00:07:06.079 "read": true, 00:07:06.079 "write": true, 00:07:06.079 "unmap": true, 00:07:06.079 "flush": true, 00:07:06.079 "reset": true, 00:07:06.079 "nvme_admin": false, 00:07:06.079 "nvme_io": false, 00:07:06.079 "nvme_io_md": false, 00:07:06.079 "write_zeroes": true, 00:07:06.079 "zcopy": true, 00:07:06.079 "get_zone_info": false, 00:07:06.079 "zone_management": false, 00:07:06.079 "zone_append": false, 00:07:06.079 "compare": false, 00:07:06.079 "compare_and_write": false, 00:07:06.079 "abort": true, 00:07:06.079 "seek_hole": false, 00:07:06.079 "seek_data": false, 00:07:06.079 "copy": true, 00:07:06.079 "nvme_iov_md": false 00:07:06.079 }, 00:07:06.079 "memory_domains": [ 00:07:06.079 { 00:07:06.079 "dma_device_id": "system", 00:07:06.079 "dma_device_type": 1 00:07:06.079 }, 00:07:06.079 { 00:07:06.079 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:06.079 "dma_device_type": 2 00:07:06.079 } 00:07:06.079 ], 00:07:06.079 "driver_specific": {} 00:07:06.079 } 00:07:06.079 ] 00:07:06.079 04:56:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:06.079 04:56:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:07:06.079 04:56:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:06.079 04:56:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:06.079 04:56:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:06.079 04:56:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:06.079 04:56:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:06.079 04:56:16 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:06.079 04:56:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:06.079 04:56:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:06.079 04:56:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:06.079 04:56:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:06.079 04:56:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:06.079 04:56:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:06.079 04:56:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:06.079 04:56:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:06.079 04:56:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:06.079 04:56:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:06.079 "name": "Existed_Raid", 00:07:06.079 "uuid": "54da3427-12d0-43a8-a804-f5de310fd56a", 00:07:06.079 "strip_size_kb": 64, 00:07:06.079 "state": "configuring", 00:07:06.079 "raid_level": "concat", 00:07:06.079 "superblock": true, 00:07:06.079 "num_base_bdevs": 2, 00:07:06.079 "num_base_bdevs_discovered": 1, 00:07:06.079 "num_base_bdevs_operational": 2, 00:07:06.079 "base_bdevs_list": [ 00:07:06.079 { 00:07:06.079 "name": "BaseBdev1", 00:07:06.079 "uuid": "f79d5d0f-ed4d-4a24-9738-bd90c2be560d", 00:07:06.079 "is_configured": true, 00:07:06.079 "data_offset": 2048, 00:07:06.079 "data_size": 63488 00:07:06.079 }, 00:07:06.079 { 00:07:06.079 "name": "BaseBdev2", 00:07:06.079 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:06.079 
"is_configured": false, 00:07:06.079 "data_offset": 0, 00:07:06.079 "data_size": 0 00:07:06.079 } 00:07:06.079 ] 00:07:06.079 }' 00:07:06.079 04:56:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:06.079 04:56:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:06.648 04:56:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:06.648 04:56:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:06.648 04:56:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:06.648 [2024-12-14 04:56:17.336576] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:06.648 [2024-12-14 04:56:17.336625] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:07:06.648 04:56:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:06.648 04:56:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:06.648 04:56:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:06.648 04:56:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:06.648 [2024-12-14 04:56:17.348599] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:06.648 [2024-12-14 04:56:17.350482] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:06.648 [2024-12-14 04:56:17.350522] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:06.648 04:56:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:06.648 04:56:17 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:06.648 04:56:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:06.648 04:56:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:06.648 04:56:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:06.648 04:56:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:06.648 04:56:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:06.648 04:56:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:06.648 04:56:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:06.648 04:56:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:06.648 04:56:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:06.648 04:56:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:06.648 04:56:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:06.648 04:56:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:06.648 04:56:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:06.648 04:56:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:06.648 04:56:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:06.648 04:56:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:06.648 04:56:17 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:06.648 "name": "Existed_Raid", 00:07:06.648 "uuid": "5debb268-bc46-478e-8267-9e66ceaa7c3b", 00:07:06.648 "strip_size_kb": 64, 00:07:06.648 "state": "configuring", 00:07:06.648 "raid_level": "concat", 00:07:06.648 "superblock": true, 00:07:06.648 "num_base_bdevs": 2, 00:07:06.648 "num_base_bdevs_discovered": 1, 00:07:06.648 "num_base_bdevs_operational": 2, 00:07:06.648 "base_bdevs_list": [ 00:07:06.648 { 00:07:06.648 "name": "BaseBdev1", 00:07:06.648 "uuid": "f79d5d0f-ed4d-4a24-9738-bd90c2be560d", 00:07:06.648 "is_configured": true, 00:07:06.648 "data_offset": 2048, 00:07:06.648 "data_size": 63488 00:07:06.648 }, 00:07:06.648 { 00:07:06.648 "name": "BaseBdev2", 00:07:06.648 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:06.648 "is_configured": false, 00:07:06.648 "data_offset": 0, 00:07:06.648 "data_size": 0 00:07:06.648 } 00:07:06.648 ] 00:07:06.648 }' 00:07:06.648 04:56:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:06.648 04:56:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:07.217 04:56:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:07.217 04:56:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:07.217 04:56:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:07.217 [2024-12-14 04:56:17.817524] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:07.217 [2024-12-14 04:56:17.818077] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:07:07.217 [2024-12-14 04:56:17.818139] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:07.217 BaseBdev2 00:07:07.217 [2024-12-14 04:56:17.819010] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:07.217 04:56:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:07.217 [2024-12-14 04:56:17.819507] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:07:07.217 [2024-12-14 04:56:17.819563] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:07:07.217 04:56:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:07.217 [2024-12-14 04:56:17.819920] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:07.217 04:56:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:07:07.217 04:56:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:07.217 04:56:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:07:07.217 04:56:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:07.217 04:56:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:07.217 04:56:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:07.217 04:56:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:07.217 04:56:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:07.217 04:56:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:07.217 04:56:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:07.217 04:56:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:07.217 
04:56:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:07.217 [ 00:07:07.217 { 00:07:07.217 "name": "BaseBdev2", 00:07:07.217 "aliases": [ 00:07:07.217 "e3fad134-c66d-4dfa-bc21-347c6606afc5" 00:07:07.217 ], 00:07:07.217 "product_name": "Malloc disk", 00:07:07.217 "block_size": 512, 00:07:07.217 "num_blocks": 65536, 00:07:07.217 "uuid": "e3fad134-c66d-4dfa-bc21-347c6606afc5", 00:07:07.217 "assigned_rate_limits": { 00:07:07.217 "rw_ios_per_sec": 0, 00:07:07.217 "rw_mbytes_per_sec": 0, 00:07:07.217 "r_mbytes_per_sec": 0, 00:07:07.217 "w_mbytes_per_sec": 0 00:07:07.217 }, 00:07:07.217 "claimed": true, 00:07:07.217 "claim_type": "exclusive_write", 00:07:07.217 "zoned": false, 00:07:07.217 "supported_io_types": { 00:07:07.217 "read": true, 00:07:07.217 "write": true, 00:07:07.217 "unmap": true, 00:07:07.217 "flush": true, 00:07:07.217 "reset": true, 00:07:07.217 "nvme_admin": false, 00:07:07.217 "nvme_io": false, 00:07:07.217 "nvme_io_md": false, 00:07:07.217 "write_zeroes": true, 00:07:07.217 "zcopy": true, 00:07:07.217 "get_zone_info": false, 00:07:07.217 "zone_management": false, 00:07:07.217 "zone_append": false, 00:07:07.217 "compare": false, 00:07:07.217 "compare_and_write": false, 00:07:07.217 "abort": true, 00:07:07.217 "seek_hole": false, 00:07:07.217 "seek_data": false, 00:07:07.217 "copy": true, 00:07:07.217 "nvme_iov_md": false 00:07:07.217 }, 00:07:07.217 "memory_domains": [ 00:07:07.217 { 00:07:07.217 "dma_device_id": "system", 00:07:07.217 "dma_device_type": 1 00:07:07.217 }, 00:07:07.217 { 00:07:07.217 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:07.217 "dma_device_type": 2 00:07:07.217 } 00:07:07.217 ], 00:07:07.217 "driver_specific": {} 00:07:07.217 } 00:07:07.217 ] 00:07:07.217 04:56:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:07.217 04:56:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:07:07.217 04:56:17 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:07.217 04:56:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:07.217 04:56:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:07:07.217 04:56:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:07.217 04:56:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:07.217 04:56:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:07.217 04:56:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:07.217 04:56:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:07.217 04:56:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:07.217 04:56:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:07.217 04:56:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:07.217 04:56:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:07.217 04:56:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:07.217 04:56:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:07.217 04:56:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:07.217 04:56:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:07.217 04:56:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:07.217 04:56:17 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:07.217 "name": "Existed_Raid", 00:07:07.217 "uuid": "5debb268-bc46-478e-8267-9e66ceaa7c3b", 00:07:07.217 "strip_size_kb": 64, 00:07:07.217 "state": "online", 00:07:07.217 "raid_level": "concat", 00:07:07.217 "superblock": true, 00:07:07.217 "num_base_bdevs": 2, 00:07:07.217 "num_base_bdevs_discovered": 2, 00:07:07.217 "num_base_bdevs_operational": 2, 00:07:07.217 "base_bdevs_list": [ 00:07:07.217 { 00:07:07.217 "name": "BaseBdev1", 00:07:07.217 "uuid": "f79d5d0f-ed4d-4a24-9738-bd90c2be560d", 00:07:07.217 "is_configured": true, 00:07:07.217 "data_offset": 2048, 00:07:07.217 "data_size": 63488 00:07:07.217 }, 00:07:07.217 { 00:07:07.217 "name": "BaseBdev2", 00:07:07.217 "uuid": "e3fad134-c66d-4dfa-bc21-347c6606afc5", 00:07:07.217 "is_configured": true, 00:07:07.217 "data_offset": 2048, 00:07:07.217 "data_size": 63488 00:07:07.217 } 00:07:07.217 ] 00:07:07.217 }' 00:07:07.217 04:56:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:07.217 04:56:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:07.476 04:56:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:07.477 04:56:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:07.477 04:56:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:07.477 04:56:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:07.477 04:56:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:07:07.477 04:56:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:07.477 04:56:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:07.477 04:56:18 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:07.477 04:56:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:07.477 04:56:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:07.477 [2024-12-14 04:56:18.304916] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:07.477 04:56:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:07.477 04:56:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:07.477 "name": "Existed_Raid", 00:07:07.477 "aliases": [ 00:07:07.477 "5debb268-bc46-478e-8267-9e66ceaa7c3b" 00:07:07.477 ], 00:07:07.477 "product_name": "Raid Volume", 00:07:07.477 "block_size": 512, 00:07:07.477 "num_blocks": 126976, 00:07:07.477 "uuid": "5debb268-bc46-478e-8267-9e66ceaa7c3b", 00:07:07.477 "assigned_rate_limits": { 00:07:07.477 "rw_ios_per_sec": 0, 00:07:07.477 "rw_mbytes_per_sec": 0, 00:07:07.477 "r_mbytes_per_sec": 0, 00:07:07.477 "w_mbytes_per_sec": 0 00:07:07.477 }, 00:07:07.477 "claimed": false, 00:07:07.477 "zoned": false, 00:07:07.477 "supported_io_types": { 00:07:07.477 "read": true, 00:07:07.477 "write": true, 00:07:07.477 "unmap": true, 00:07:07.477 "flush": true, 00:07:07.477 "reset": true, 00:07:07.477 "nvme_admin": false, 00:07:07.477 "nvme_io": false, 00:07:07.477 "nvme_io_md": false, 00:07:07.477 "write_zeroes": true, 00:07:07.477 "zcopy": false, 00:07:07.477 "get_zone_info": false, 00:07:07.477 "zone_management": false, 00:07:07.477 "zone_append": false, 00:07:07.477 "compare": false, 00:07:07.477 "compare_and_write": false, 00:07:07.477 "abort": false, 00:07:07.477 "seek_hole": false, 00:07:07.477 "seek_data": false, 00:07:07.477 "copy": false, 00:07:07.477 "nvme_iov_md": false 00:07:07.477 }, 00:07:07.477 "memory_domains": [ 00:07:07.477 { 00:07:07.477 "dma_device_id": 
"system", 00:07:07.477 "dma_device_type": 1 00:07:07.477 }, 00:07:07.477 { 00:07:07.477 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:07.477 "dma_device_type": 2 00:07:07.477 }, 00:07:07.477 { 00:07:07.477 "dma_device_id": "system", 00:07:07.477 "dma_device_type": 1 00:07:07.477 }, 00:07:07.477 { 00:07:07.477 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:07.477 "dma_device_type": 2 00:07:07.477 } 00:07:07.477 ], 00:07:07.477 "driver_specific": { 00:07:07.477 "raid": { 00:07:07.477 "uuid": "5debb268-bc46-478e-8267-9e66ceaa7c3b", 00:07:07.477 "strip_size_kb": 64, 00:07:07.477 "state": "online", 00:07:07.477 "raid_level": "concat", 00:07:07.477 "superblock": true, 00:07:07.477 "num_base_bdevs": 2, 00:07:07.477 "num_base_bdevs_discovered": 2, 00:07:07.477 "num_base_bdevs_operational": 2, 00:07:07.477 "base_bdevs_list": [ 00:07:07.477 { 00:07:07.477 "name": "BaseBdev1", 00:07:07.477 "uuid": "f79d5d0f-ed4d-4a24-9738-bd90c2be560d", 00:07:07.477 "is_configured": true, 00:07:07.477 "data_offset": 2048, 00:07:07.477 "data_size": 63488 00:07:07.477 }, 00:07:07.477 { 00:07:07.477 "name": "BaseBdev2", 00:07:07.477 "uuid": "e3fad134-c66d-4dfa-bc21-347c6606afc5", 00:07:07.477 "is_configured": true, 00:07:07.477 "data_offset": 2048, 00:07:07.477 "data_size": 63488 00:07:07.477 } 00:07:07.477 ] 00:07:07.477 } 00:07:07.477 } 00:07:07.477 }' 00:07:07.477 04:56:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:07.737 04:56:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:07.737 BaseBdev2' 00:07:07.737 04:56:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:07.737 04:56:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:07.737 04:56:18 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:07.737 04:56:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:07.737 04:56:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:07.737 04:56:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:07.737 04:56:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:07.737 04:56:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:07.737 04:56:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:07.737 04:56:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:07.737 04:56:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:07.737 04:56:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:07.737 04:56:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:07.737 04:56:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:07.737 04:56:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:07.737 04:56:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:07.737 04:56:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:07.737 04:56:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:07.737 04:56:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 
00:07:07.737 04:56:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:07.737 04:56:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:07.737 [2024-12-14 04:56:18.508328] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:07.737 [2024-12-14 04:56:18.508361] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:07.737 [2024-12-14 04:56:18.508414] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:07.737 04:56:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:07.737 04:56:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:07.737 04:56:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:07:07.737 04:56:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:07.737 04:56:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:07:07.737 04:56:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:07:07.737 04:56:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:07:07.737 04:56:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:07.737 04:56:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:07.737 04:56:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:07.737 04:56:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:07.737 04:56:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:07.737 04:56:18 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:07.737 04:56:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:07.737 04:56:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:07.737 04:56:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:07.737 04:56:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:07.737 04:56:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:07.737 04:56:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:07.737 04:56:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:07.737 04:56:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:07.737 04:56:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:07.737 "name": "Existed_Raid", 00:07:07.737 "uuid": "5debb268-bc46-478e-8267-9e66ceaa7c3b", 00:07:07.737 "strip_size_kb": 64, 00:07:07.737 "state": "offline", 00:07:07.737 "raid_level": "concat", 00:07:07.737 "superblock": true, 00:07:07.737 "num_base_bdevs": 2, 00:07:07.737 "num_base_bdevs_discovered": 1, 00:07:07.737 "num_base_bdevs_operational": 1, 00:07:07.737 "base_bdevs_list": [ 00:07:07.737 { 00:07:07.737 "name": null, 00:07:07.737 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:07.737 "is_configured": false, 00:07:07.737 "data_offset": 0, 00:07:07.737 "data_size": 63488 00:07:07.737 }, 00:07:07.737 { 00:07:07.737 "name": "BaseBdev2", 00:07:07.737 "uuid": "e3fad134-c66d-4dfa-bc21-347c6606afc5", 00:07:07.737 "is_configured": true, 00:07:07.737 "data_offset": 2048, 00:07:07.737 "data_size": 63488 00:07:07.737 } 00:07:07.737 ] 00:07:07.737 }' 00:07:07.737 
04:56:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:07.737 04:56:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:08.306 04:56:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:08.306 04:56:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:08.306 04:56:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:08.306 04:56:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:08.306 04:56:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:08.306 04:56:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:08.306 04:56:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:08.306 04:56:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:08.306 04:56:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:08.306 04:56:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:08.306 04:56:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:08.306 04:56:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:08.306 [2024-12-14 04:56:18.978802] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:08.306 [2024-12-14 04:56:18.978858] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:07:08.306 04:56:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:08.306 04:56:18 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:08.306 04:56:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:08.306 04:56:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:08.306 04:56:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:08.306 04:56:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:08.306 04:56:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:08.306 04:56:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:08.306 04:56:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:08.306 04:56:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:08.306 04:56:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:08.306 04:56:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 73319 00:07:08.306 04:56:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 73319 ']' 00:07:08.306 04:56:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 73319 00:07:08.306 04:56:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:07:08.306 04:56:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:08.306 04:56:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73319 00:07:08.306 04:56:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:08.306 04:56:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:08.306 killing process 
with pid 73319 00:07:08.306 04:56:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73319' 00:07:08.306 04:56:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 73319 00:07:08.306 [2024-12-14 04:56:19.074828] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:08.306 04:56:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 73319 00:07:08.306 [2024-12-14 04:56:19.075809] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:08.565 04:56:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:07:08.565 00:07:08.565 real 0m3.772s 00:07:08.565 user 0m5.954s 00:07:08.565 sys 0m0.712s 00:07:08.565 04:56:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:08.565 04:56:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:08.565 ************************************ 00:07:08.565 END TEST raid_state_function_test_sb 00:07:08.565 ************************************ 00:07:08.565 04:56:19 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:07:08.565 04:56:19 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:07:08.565 04:56:19 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:08.565 04:56:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:08.565 ************************************ 00:07:08.565 START TEST raid_superblock_test 00:07:08.565 ************************************ 00:07:08.565 04:56:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test concat 2 00:07:08.565 04:56:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:07:08.565 04:56:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 
00:07:08.565 04:56:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:07:08.565 04:56:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:07:08.565 04:56:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:07:08.565 04:56:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:07:08.565 04:56:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:07:08.565 04:56:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:07:08.565 04:56:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:07:08.566 04:56:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:07:08.566 04:56:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:07:08.566 04:56:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:07:08.566 04:56:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:07:08.566 04:56:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:07:08.566 04:56:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:07:08.566 04:56:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:07:08.566 04:56:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=73560 00:07:08.566 04:56:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 73560 00:07:08.566 04:56:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:07:08.566 04:56:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 73560 ']' 00:07:08.566 04:56:19 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:08.566 04:56:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:08.566 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:08.566 04:56:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:08.566 04:56:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:08.566 04:56:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.825 [2024-12-14 04:56:19.479991] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:08.825 [2024-12-14 04:56:19.480129] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73560 ] 00:07:08.825 [2024-12-14 04:56:19.641148] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.825 [2024-12-14 04:56:19.685922] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.084 [2024-12-14 04:56:19.727690] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:09.084 [2024-12-14 04:56:19.727734] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:09.653 04:56:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:09.653 04:56:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:07:09.653 04:56:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:07:09.653 04:56:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:09.653 04:56:20 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:07:09.653 04:56:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:07:09.653 04:56:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:07:09.653 04:56:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:09.653 04:56:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:09.653 04:56:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:09.653 04:56:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:07:09.653 04:56:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.653 04:56:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.653 malloc1 00:07:09.653 04:56:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.653 04:56:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:09.653 04:56:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.653 04:56:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.653 [2024-12-14 04:56:20.317413] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:09.653 [2024-12-14 04:56:20.317490] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:09.653 [2024-12-14 04:56:20.317512] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:07:09.653 [2024-12-14 04:56:20.317526] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:09.653 
[2024-12-14 04:56:20.319849] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:09.653 [2024-12-14 04:56:20.319889] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:09.653 pt1 00:07:09.653 04:56:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.653 04:56:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:09.653 04:56:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:09.653 04:56:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:07:09.653 04:56:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:07:09.653 04:56:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:07:09.653 04:56:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:09.653 04:56:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:09.653 04:56:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:09.653 04:56:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:07:09.653 04:56:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.653 04:56:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.653 malloc2 00:07:09.653 04:56:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.653 04:56:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:09.653 04:56:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.653 04:56:20 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.653 [2024-12-14 04:56:20.363821] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:09.653 [2024-12-14 04:56:20.363902] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:09.653 [2024-12-14 04:56:20.363930] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:07:09.653 [2024-12-14 04:56:20.363948] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:09.653 [2024-12-14 04:56:20.366823] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:09.653 [2024-12-14 04:56:20.366864] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:09.653 pt2 00:07:09.653 04:56:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.653 04:56:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:09.653 04:56:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:09.653 04:56:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:07:09.653 04:56:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.653 04:56:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.653 [2024-12-14 04:56:20.375791] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:09.653 [2024-12-14 04:56:20.377582] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:09.653 [2024-12-14 04:56:20.377714] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:07:09.653 [2024-12-14 04:56:20.377728] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:09.653 
[2024-12-14 04:56:20.377984] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:09.653 [2024-12-14 04:56:20.378119] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:07:09.653 [2024-12-14 04:56:20.378133] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:07:09.653 [2024-12-14 04:56:20.378271] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:09.653 04:56:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.653 04:56:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:09.653 04:56:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:09.653 04:56:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:09.653 04:56:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:09.653 04:56:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:09.654 04:56:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:09.654 04:56:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:09.654 04:56:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:09.654 04:56:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:09.654 04:56:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:09.654 04:56:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:09.654 04:56:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:09.654 04:56:20 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.654 04:56:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.654 04:56:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.654 04:56:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:09.654 "name": "raid_bdev1", 00:07:09.654 "uuid": "6b763199-8151-4ef0-8ce8-5cf13f2ea917", 00:07:09.654 "strip_size_kb": 64, 00:07:09.654 "state": "online", 00:07:09.654 "raid_level": "concat", 00:07:09.654 "superblock": true, 00:07:09.654 "num_base_bdevs": 2, 00:07:09.654 "num_base_bdevs_discovered": 2, 00:07:09.654 "num_base_bdevs_operational": 2, 00:07:09.654 "base_bdevs_list": [ 00:07:09.654 { 00:07:09.654 "name": "pt1", 00:07:09.654 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:09.654 "is_configured": true, 00:07:09.654 "data_offset": 2048, 00:07:09.654 "data_size": 63488 00:07:09.654 }, 00:07:09.654 { 00:07:09.654 "name": "pt2", 00:07:09.654 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:09.654 "is_configured": true, 00:07:09.654 "data_offset": 2048, 00:07:09.654 "data_size": 63488 00:07:09.654 } 00:07:09.654 ] 00:07:09.654 }' 00:07:09.654 04:56:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:09.654 04:56:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.223 04:56:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:07:10.223 04:56:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:10.223 04:56:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:10.223 04:56:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:10.223 04:56:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:10.223 
04:56:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:10.223 04:56:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:10.223 04:56:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.223 04:56:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.223 04:56:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:10.223 [2024-12-14 04:56:20.811444] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:10.223 04:56:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:10.223 04:56:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:10.223 "name": "raid_bdev1", 00:07:10.223 "aliases": [ 00:07:10.223 "6b763199-8151-4ef0-8ce8-5cf13f2ea917" 00:07:10.223 ], 00:07:10.223 "product_name": "Raid Volume", 00:07:10.223 "block_size": 512, 00:07:10.223 "num_blocks": 126976, 00:07:10.223 "uuid": "6b763199-8151-4ef0-8ce8-5cf13f2ea917", 00:07:10.223 "assigned_rate_limits": { 00:07:10.223 "rw_ios_per_sec": 0, 00:07:10.223 "rw_mbytes_per_sec": 0, 00:07:10.223 "r_mbytes_per_sec": 0, 00:07:10.223 "w_mbytes_per_sec": 0 00:07:10.223 }, 00:07:10.223 "claimed": false, 00:07:10.223 "zoned": false, 00:07:10.223 "supported_io_types": { 00:07:10.223 "read": true, 00:07:10.223 "write": true, 00:07:10.223 "unmap": true, 00:07:10.223 "flush": true, 00:07:10.223 "reset": true, 00:07:10.223 "nvme_admin": false, 00:07:10.223 "nvme_io": false, 00:07:10.223 "nvme_io_md": false, 00:07:10.223 "write_zeroes": true, 00:07:10.223 "zcopy": false, 00:07:10.223 "get_zone_info": false, 00:07:10.223 "zone_management": false, 00:07:10.223 "zone_append": false, 00:07:10.223 "compare": false, 00:07:10.223 "compare_and_write": false, 00:07:10.223 "abort": false, 00:07:10.223 "seek_hole": false, 00:07:10.223 
"seek_data": false, 00:07:10.223 "copy": false, 00:07:10.223 "nvme_iov_md": false 00:07:10.223 }, 00:07:10.223 "memory_domains": [ 00:07:10.223 { 00:07:10.223 "dma_device_id": "system", 00:07:10.223 "dma_device_type": 1 00:07:10.223 }, 00:07:10.223 { 00:07:10.223 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:10.223 "dma_device_type": 2 00:07:10.223 }, 00:07:10.223 { 00:07:10.223 "dma_device_id": "system", 00:07:10.223 "dma_device_type": 1 00:07:10.223 }, 00:07:10.223 { 00:07:10.223 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:10.223 "dma_device_type": 2 00:07:10.223 } 00:07:10.223 ], 00:07:10.223 "driver_specific": { 00:07:10.223 "raid": { 00:07:10.223 "uuid": "6b763199-8151-4ef0-8ce8-5cf13f2ea917", 00:07:10.223 "strip_size_kb": 64, 00:07:10.223 "state": "online", 00:07:10.223 "raid_level": "concat", 00:07:10.223 "superblock": true, 00:07:10.223 "num_base_bdevs": 2, 00:07:10.223 "num_base_bdevs_discovered": 2, 00:07:10.223 "num_base_bdevs_operational": 2, 00:07:10.223 "base_bdevs_list": [ 00:07:10.223 { 00:07:10.223 "name": "pt1", 00:07:10.223 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:10.223 "is_configured": true, 00:07:10.223 "data_offset": 2048, 00:07:10.223 "data_size": 63488 00:07:10.223 }, 00:07:10.223 { 00:07:10.223 "name": "pt2", 00:07:10.223 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:10.223 "is_configured": true, 00:07:10.223 "data_offset": 2048, 00:07:10.223 "data_size": 63488 00:07:10.223 } 00:07:10.223 ] 00:07:10.223 } 00:07:10.223 } 00:07:10.223 }' 00:07:10.223 04:56:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:10.223 04:56:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:10.223 pt2' 00:07:10.223 04:56:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:10.223 04:56:20 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:10.223 04:56:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:10.223 04:56:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:10.223 04:56:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:10.223 04:56:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.223 04:56:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.223 04:56:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:10.223 04:56:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:10.223 04:56:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:10.223 04:56:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:10.223 04:56:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:10.223 04:56:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.223 04:56:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.223 04:56:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:10.223 04:56:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:10.223 04:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:10.223 04:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:10.223 04:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 
00:07:10.223 04:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:10.223 04:56:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.223 04:56:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.223 [2024-12-14 04:56:21.022896] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:10.223 04:56:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:10.223 04:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=6b763199-8151-4ef0-8ce8-5cf13f2ea917 00:07:10.223 04:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 6b763199-8151-4ef0-8ce8-5cf13f2ea917 ']' 00:07:10.223 04:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:10.223 04:56:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.223 04:56:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.223 [2024-12-14 04:56:21.050631] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:10.223 [2024-12-14 04:56:21.050660] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:10.223 [2024-12-14 04:56:21.050727] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:10.223 [2024-12-14 04:56:21.050775] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:10.223 [2024-12-14 04:56:21.050792] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:07:10.223 04:56:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:10.223 04:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:07:10.223 04:56:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.223 04:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:07:10.223 04:56:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.223 04:56:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:10.483 04:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:07:10.483 04:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:07:10.483 04:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:10.483 04:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:07:10.483 04:56:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.483 04:56:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.483 04:56:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:10.483 04:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:10.483 04:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:07:10.483 04:56:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.483 04:56:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.483 04:56:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:10.483 04:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:07:10.483 04:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:07:10.483 04:56:21 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.483 04:56:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.483 04:56:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:10.483 04:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:07:10.483 04:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:10.483 04:56:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:07:10.483 04:56:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:10.483 04:56:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:07:10.483 04:56:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:10.483 04:56:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:07:10.483 04:56:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:10.483 04:56:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:10.483 04:56:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.483 04:56:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.483 [2024-12-14 04:56:21.178458] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:07:10.483 [2024-12-14 04:56:21.180277] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:07:10.483 [2024-12-14 04:56:21.180347] 
bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:07:10.483 [2024-12-14 04:56:21.180387] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:07:10.483 [2024-12-14 04:56:21.180402] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:10.484 [2024-12-14 04:56:21.180410] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state configuring 00:07:10.484 request: 00:07:10.484 { 00:07:10.484 "name": "raid_bdev1", 00:07:10.484 "raid_level": "concat", 00:07:10.484 "base_bdevs": [ 00:07:10.484 "malloc1", 00:07:10.484 "malloc2" 00:07:10.484 ], 00:07:10.484 "strip_size_kb": 64, 00:07:10.484 "superblock": false, 00:07:10.484 "method": "bdev_raid_create", 00:07:10.484 "req_id": 1 00:07:10.484 } 00:07:10.484 Got JSON-RPC error response 00:07:10.484 response: 00:07:10.484 { 00:07:10.484 "code": -17, 00:07:10.484 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:07:10.484 } 00:07:10.484 04:56:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:07:10.484 04:56:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:07:10.484 04:56:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:10.484 04:56:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:10.484 04:56:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:10.484 04:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:10.484 04:56:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.484 04:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:07:10.484 04:56:21 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@10 -- # set +x 00:07:10.484 04:56:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:10.484 04:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:07:10.484 04:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:07:10.484 04:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:10.484 04:56:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.484 04:56:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.484 [2024-12-14 04:56:21.242309] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:10.484 [2024-12-14 04:56:21.242351] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:10.484 [2024-12-14 04:56:21.242366] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:07:10.484 [2024-12-14 04:56:21.242374] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:10.484 [2024-12-14 04:56:21.244417] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:10.484 [2024-12-14 04:56:21.244451] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:10.484 [2024-12-14 04:56:21.244510] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:07:10.484 [2024-12-14 04:56:21.244552] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:10.484 pt1 00:07:10.484 04:56:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:10.484 04:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:07:10.484 04:56:21 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:10.484 04:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:10.484 04:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:10.484 04:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:10.484 04:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:10.484 04:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:10.484 04:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:10.484 04:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:10.484 04:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:10.484 04:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:10.484 04:56:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.484 04:56:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.484 04:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:10.484 04:56:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:10.484 04:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:10.484 "name": "raid_bdev1", 00:07:10.484 "uuid": "6b763199-8151-4ef0-8ce8-5cf13f2ea917", 00:07:10.484 "strip_size_kb": 64, 00:07:10.484 "state": "configuring", 00:07:10.484 "raid_level": "concat", 00:07:10.484 "superblock": true, 00:07:10.484 "num_base_bdevs": 2, 00:07:10.484 "num_base_bdevs_discovered": 1, 00:07:10.484 "num_base_bdevs_operational": 2, 00:07:10.484 "base_bdevs_list": [ 00:07:10.484 { 00:07:10.484 
"name": "pt1", 00:07:10.484 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:10.484 "is_configured": true, 00:07:10.484 "data_offset": 2048, 00:07:10.484 "data_size": 63488 00:07:10.484 }, 00:07:10.484 { 00:07:10.484 "name": null, 00:07:10.484 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:10.484 "is_configured": false, 00:07:10.484 "data_offset": 2048, 00:07:10.484 "data_size": 63488 00:07:10.484 } 00:07:10.484 ] 00:07:10.484 }' 00:07:10.484 04:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:10.484 04:56:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.053 04:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:07:11.053 04:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:07:11.053 04:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:11.053 04:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:11.053 04:56:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:11.053 04:56:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.053 [2024-12-14 04:56:21.677577] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:11.053 [2024-12-14 04:56:21.677635] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:11.053 [2024-12-14 04:56:21.677657] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:07:11.053 [2024-12-14 04:56:21.677666] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:11.053 [2024-12-14 04:56:21.678056] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:11.053 [2024-12-14 04:56:21.678081] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:11.053 [2024-12-14 04:56:21.678151] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:07:11.053 [2024-12-14 04:56:21.678186] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:11.053 [2024-12-14 04:56:21.678277] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:07:11.053 [2024-12-14 04:56:21.678285] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:11.053 [2024-12-14 04:56:21.678512] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:07:11.053 [2024-12-14 04:56:21.678628] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:07:11.053 [2024-12-14 04:56:21.678647] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:07:11.053 [2024-12-14 04:56:21.678744] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:11.053 pt2 00:07:11.053 04:56:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:11.053 04:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:07:11.053 04:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:11.053 04:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:11.053 04:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:11.053 04:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:11.053 04:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:11.053 04:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:11.053 
04:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:11.053 04:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:11.053 04:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:11.053 04:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:11.053 04:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:11.053 04:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:11.053 04:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:11.053 04:56:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:11.053 04:56:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.053 04:56:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:11.053 04:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:11.053 "name": "raid_bdev1", 00:07:11.053 "uuid": "6b763199-8151-4ef0-8ce8-5cf13f2ea917", 00:07:11.053 "strip_size_kb": 64, 00:07:11.053 "state": "online", 00:07:11.053 "raid_level": "concat", 00:07:11.053 "superblock": true, 00:07:11.053 "num_base_bdevs": 2, 00:07:11.053 "num_base_bdevs_discovered": 2, 00:07:11.053 "num_base_bdevs_operational": 2, 00:07:11.053 "base_bdevs_list": [ 00:07:11.053 { 00:07:11.053 "name": "pt1", 00:07:11.053 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:11.053 "is_configured": true, 00:07:11.053 "data_offset": 2048, 00:07:11.053 "data_size": 63488 00:07:11.053 }, 00:07:11.053 { 00:07:11.053 "name": "pt2", 00:07:11.053 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:11.053 "is_configured": true, 00:07:11.053 "data_offset": 2048, 00:07:11.053 "data_size": 63488 
00:07:11.053 } 00:07:11.053 ] 00:07:11.053 }' 00:07:11.053 04:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:11.053 04:56:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.312 04:56:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:07:11.312 04:56:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:11.312 04:56:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:11.312 04:56:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:11.312 04:56:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:11.312 04:56:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:11.312 04:56:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:11.312 04:56:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:11.312 04:56:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.312 04:56:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:11.312 [2024-12-14 04:56:22.077196] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:11.312 04:56:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:11.312 04:56:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:11.312 "name": "raid_bdev1", 00:07:11.312 "aliases": [ 00:07:11.312 "6b763199-8151-4ef0-8ce8-5cf13f2ea917" 00:07:11.312 ], 00:07:11.312 "product_name": "Raid Volume", 00:07:11.312 "block_size": 512, 00:07:11.312 "num_blocks": 126976, 00:07:11.312 "uuid": "6b763199-8151-4ef0-8ce8-5cf13f2ea917", 00:07:11.312 "assigned_rate_limits": { 00:07:11.312 
"rw_ios_per_sec": 0, 00:07:11.312 "rw_mbytes_per_sec": 0, 00:07:11.312 "r_mbytes_per_sec": 0, 00:07:11.312 "w_mbytes_per_sec": 0 00:07:11.312 }, 00:07:11.312 "claimed": false, 00:07:11.312 "zoned": false, 00:07:11.312 "supported_io_types": { 00:07:11.312 "read": true, 00:07:11.312 "write": true, 00:07:11.312 "unmap": true, 00:07:11.312 "flush": true, 00:07:11.312 "reset": true, 00:07:11.312 "nvme_admin": false, 00:07:11.312 "nvme_io": false, 00:07:11.312 "nvme_io_md": false, 00:07:11.312 "write_zeroes": true, 00:07:11.312 "zcopy": false, 00:07:11.312 "get_zone_info": false, 00:07:11.312 "zone_management": false, 00:07:11.312 "zone_append": false, 00:07:11.312 "compare": false, 00:07:11.312 "compare_and_write": false, 00:07:11.312 "abort": false, 00:07:11.312 "seek_hole": false, 00:07:11.312 "seek_data": false, 00:07:11.312 "copy": false, 00:07:11.312 "nvme_iov_md": false 00:07:11.312 }, 00:07:11.312 "memory_domains": [ 00:07:11.312 { 00:07:11.312 "dma_device_id": "system", 00:07:11.312 "dma_device_type": 1 00:07:11.312 }, 00:07:11.312 { 00:07:11.312 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:11.312 "dma_device_type": 2 00:07:11.312 }, 00:07:11.312 { 00:07:11.312 "dma_device_id": "system", 00:07:11.312 "dma_device_type": 1 00:07:11.312 }, 00:07:11.312 { 00:07:11.312 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:11.312 "dma_device_type": 2 00:07:11.312 } 00:07:11.312 ], 00:07:11.312 "driver_specific": { 00:07:11.312 "raid": { 00:07:11.312 "uuid": "6b763199-8151-4ef0-8ce8-5cf13f2ea917", 00:07:11.312 "strip_size_kb": 64, 00:07:11.312 "state": "online", 00:07:11.312 "raid_level": "concat", 00:07:11.312 "superblock": true, 00:07:11.312 "num_base_bdevs": 2, 00:07:11.312 "num_base_bdevs_discovered": 2, 00:07:11.312 "num_base_bdevs_operational": 2, 00:07:11.312 "base_bdevs_list": [ 00:07:11.313 { 00:07:11.313 "name": "pt1", 00:07:11.313 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:11.313 "is_configured": true, 00:07:11.313 "data_offset": 2048, 00:07:11.313 
"data_size": 63488 00:07:11.313 }, 00:07:11.313 { 00:07:11.313 "name": "pt2", 00:07:11.313 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:11.313 "is_configured": true, 00:07:11.313 "data_offset": 2048, 00:07:11.313 "data_size": 63488 00:07:11.313 } 00:07:11.313 ] 00:07:11.313 } 00:07:11.313 } 00:07:11.313 }' 00:07:11.313 04:56:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:11.313 04:56:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:11.313 pt2' 00:07:11.313 04:56:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:11.572 04:56:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:11.572 04:56:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:11.572 04:56:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:11.572 04:56:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:11.572 04:56:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:11.572 04:56:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.572 04:56:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:11.572 04:56:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:11.572 04:56:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:11.572 04:56:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:11.572 04:56:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 
00:07:11.572 04:56:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:11.572 04:56:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.572 04:56:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:11.572 04:56:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:11.572 04:56:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:11.572 04:56:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:11.572 04:56:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:07:11.572 04:56:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:11.572 04:56:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:11.572 04:56:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.572 [2024-12-14 04:56:22.312687] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:11.572 04:56:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:11.572 04:56:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 6b763199-8151-4ef0-8ce8-5cf13f2ea917 '!=' 6b763199-8151-4ef0-8ce8-5cf13f2ea917 ']' 00:07:11.572 04:56:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:07:11.572 04:56:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:11.572 04:56:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:11.572 04:56:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 73560 00:07:11.572 04:56:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 73560 ']' 
00:07:11.572 04:56:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 73560 00:07:11.572 04:56:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:07:11.572 04:56:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:11.572 04:56:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73560 00:07:11.572 04:56:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:11.572 04:56:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:11.572 04:56:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73560' 00:07:11.572 killing process with pid 73560 00:07:11.572 04:56:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 73560 00:07:11.572 [2024-12-14 04:56:22.395775] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:11.572 [2024-12-14 04:56:22.395858] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:11.572 [2024-12-14 04:56:22.395908] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:11.572 [2024-12-14 04:56:22.395916] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:07:11.572 04:56:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 73560 00:07:11.572 [2024-12-14 04:56:22.417957] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:11.831 ************************************ 00:07:11.831 END TEST raid_superblock_test 00:07:11.831 04:56:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:07:11.831 00:07:11.831 real 0m3.257s 00:07:11.831 user 0m4.978s 00:07:11.831 sys 0m0.691s 00:07:11.831 04:56:22 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:11.831 04:56:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.831 ************************************ 00:07:11.831 04:56:22 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 2 read 00:07:11.831 04:56:22 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:11.831 04:56:22 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:11.831 04:56:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:12.091 ************************************ 00:07:12.091 START TEST raid_read_error_test 00:07:12.091 ************************************ 00:07:12.091 04:56:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 2 read 00:07:12.091 04:56:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:07:12.091 04:56:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:12.091 04:56:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:07:12.091 04:56:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:12.091 04:56:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:12.091 04:56:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:12.091 04:56:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:12.091 04:56:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:12.091 04:56:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:12.091 04:56:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:12.091 04:56:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs 
)) 00:07:12.091 04:56:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:12.091 04:56:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:12.091 04:56:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:12.091 04:56:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:12.091 04:56:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:12.091 04:56:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:12.091 04:56:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:12.091 04:56:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:07:12.091 04:56:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:12.091 04:56:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:12.091 04:56:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:12.091 04:56:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.P03oJ9Kcx2 00:07:12.091 04:56:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=73755 00:07:12.091 04:56:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:12.091 04:56:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 73755 00:07:12.091 04:56:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 73755 ']' 00:07:12.091 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:12.091 04:56:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:12.091 04:56:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:12.091 04:56:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:12.091 04:56:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:12.091 04:56:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.091 [2024-12-14 04:56:22.819090] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:12.091 [2024-12-14 04:56:22.819343] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73755 ] 00:07:12.351 [2024-12-14 04:56:22.979263] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.351 [2024-12-14 04:56:23.025470] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.351 [2024-12-14 04:56:23.067431] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:12.351 [2024-12-14 04:56:23.067467] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:12.919 04:56:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:12.919 04:56:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:07:12.919 04:56:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:12.919 04:56:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:12.919 04:56:23 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:12.919 04:56:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.919 BaseBdev1_malloc 00:07:12.919 04:56:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:12.919 04:56:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:12.919 04:56:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:12.919 04:56:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.919 true 00:07:12.919 04:56:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:12.919 04:56:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:12.919 04:56:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:12.919 04:56:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.919 [2024-12-14 04:56:23.697393] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:12.919 [2024-12-14 04:56:23.697442] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:12.919 [2024-12-14 04:56:23.697460] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:12.919 [2024-12-14 04:56:23.697476] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:12.919 [2024-12-14 04:56:23.699528] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:12.919 [2024-12-14 04:56:23.699603] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:12.919 BaseBdev1 00:07:12.919 04:56:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:12.919 
04:56:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:12.919 04:56:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:12.919 04:56:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:12.919 04:56:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.919 BaseBdev2_malloc 00:07:12.919 04:56:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:12.919 04:56:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:12.919 04:56:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:12.919 04:56:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.919 true 00:07:12.919 04:56:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:12.919 04:56:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:12.919 04:56:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:12.919 04:56:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.919 [2024-12-14 04:56:23.747950] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:12.919 [2024-12-14 04:56:23.748050] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:12.919 [2024-12-14 04:56:23.748071] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:12.919 [2024-12-14 04:56:23.748080] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:12.919 [2024-12-14 04:56:23.750030] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 
00:07:12.919 [2024-12-14 04:56:23.750065] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:12.919 BaseBdev2 00:07:12.919 04:56:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:12.919 04:56:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:12.919 04:56:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:12.919 04:56:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.919 [2024-12-14 04:56:23.759956] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:12.919 [2024-12-14 04:56:23.761746] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:12.919 [2024-12-14 04:56:23.761919] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:07:12.919 [2024-12-14 04:56:23.761932] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:12.919 [2024-12-14 04:56:23.762181] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:12.919 [2024-12-14 04:56:23.762323] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:07:12.919 [2024-12-14 04:56:23.762336] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:07:12.919 [2024-12-14 04:56:23.762476] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:12.919 04:56:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:12.919 04:56:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:12.919 04:56:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:07:12.919 04:56:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:12.919 04:56:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:12.919 04:56:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:12.919 04:56:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:12.919 04:56:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:12.919 04:56:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:12.919 04:56:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:12.919 04:56:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:12.919 04:56:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:12.919 04:56:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:12.919 04:56:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:12.919 04:56:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.919 04:56:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:13.178 04:56:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:13.178 "name": "raid_bdev1", 00:07:13.178 "uuid": "faaa4522-1f56-47c3-9ec7-da3a619c522e", 00:07:13.178 "strip_size_kb": 64, 00:07:13.178 "state": "online", 00:07:13.178 "raid_level": "concat", 00:07:13.178 "superblock": true, 00:07:13.178 "num_base_bdevs": 2, 00:07:13.178 "num_base_bdevs_discovered": 2, 00:07:13.178 "num_base_bdevs_operational": 2, 00:07:13.178 "base_bdevs_list": [ 00:07:13.178 { 00:07:13.178 "name": "BaseBdev1", 00:07:13.178 "uuid": 
"b33d1b3a-f4ce-5fd2-afc7-21e22045d433", 00:07:13.178 "is_configured": true, 00:07:13.178 "data_offset": 2048, 00:07:13.178 "data_size": 63488 00:07:13.178 }, 00:07:13.178 { 00:07:13.178 "name": "BaseBdev2", 00:07:13.178 "uuid": "ffa4595d-83da-57a2-b566-a1fec4384d21", 00:07:13.178 "is_configured": true, 00:07:13.178 "data_offset": 2048, 00:07:13.178 "data_size": 63488 00:07:13.178 } 00:07:13.178 ] 00:07:13.178 }' 00:07:13.178 04:56:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:13.178 04:56:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:13.437 04:56:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:13.437 04:56:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:13.437 [2024-12-14 04:56:24.315427] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:14.376 04:56:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:07:14.376 04:56:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.376 04:56:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.376 04:56:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.376 04:56:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:14.376 04:56:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:07:14.376 04:56:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:07:14.376 04:56:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:14.376 04:56:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:07:14.376 04:56:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:14.376 04:56:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:14.376 04:56:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:14.376 04:56:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:14.376 04:56:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:14.376 04:56:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:14.376 04:56:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:14.376 04:56:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:14.376 04:56:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:14.376 04:56:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:14.376 04:56:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.376 04:56:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.635 04:56:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.635 04:56:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:14.635 "name": "raid_bdev1", 00:07:14.635 "uuid": "faaa4522-1f56-47c3-9ec7-da3a619c522e", 00:07:14.635 "strip_size_kb": 64, 00:07:14.635 "state": "online", 00:07:14.635 "raid_level": "concat", 00:07:14.635 "superblock": true, 00:07:14.635 "num_base_bdevs": 2, 00:07:14.635 "num_base_bdevs_discovered": 2, 00:07:14.635 "num_base_bdevs_operational": 2, 00:07:14.635 "base_bdevs_list": [ 00:07:14.635 { 00:07:14.635 "name": "BaseBdev1", 00:07:14.635 "uuid": 
"b33d1b3a-f4ce-5fd2-afc7-21e22045d433", 00:07:14.635 "is_configured": true, 00:07:14.635 "data_offset": 2048, 00:07:14.635 "data_size": 63488 00:07:14.635 }, 00:07:14.635 { 00:07:14.635 "name": "BaseBdev2", 00:07:14.635 "uuid": "ffa4595d-83da-57a2-b566-a1fec4384d21", 00:07:14.635 "is_configured": true, 00:07:14.635 "data_offset": 2048, 00:07:14.635 "data_size": 63488 00:07:14.635 } 00:07:14.635 ] 00:07:14.635 }' 00:07:14.635 04:56:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:14.635 04:56:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.895 04:56:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:14.895 04:56:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.895 04:56:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.895 [2024-12-14 04:56:25.655232] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:14.895 [2024-12-14 04:56:25.655259] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:14.895 [2024-12-14 04:56:25.657685] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:14.895 [2024-12-14 04:56:25.657731] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:14.895 [2024-12-14 04:56:25.657765] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:14.895 [2024-12-14 04:56:25.657774] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:07:14.895 { 00:07:14.895 "results": [ 00:07:14.895 { 00:07:14.895 "job": "raid_bdev1", 00:07:14.895 "core_mask": "0x1", 00:07:14.895 "workload": "randrw", 00:07:14.895 "percentage": 50, 00:07:14.895 "status": "finished", 00:07:14.895 "queue_depth": 1, 00:07:14.895 "io_size": 
131072, 00:07:14.895 "runtime": 1.340587, 00:07:14.895 "iops": 17940.648387609308, 00:07:14.895 "mibps": 2242.5810484511635, 00:07:14.895 "io_failed": 1, 00:07:14.895 "io_timeout": 0, 00:07:14.895 "avg_latency_us": 77.14222910041345, 00:07:14.895 "min_latency_us": 24.482096069868994, 00:07:14.895 "max_latency_us": 1345.0620087336245 00:07:14.895 } 00:07:14.895 ], 00:07:14.895 "core_count": 1 00:07:14.895 } 00:07:14.895 04:56:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.895 04:56:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 73755 00:07:14.895 04:56:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 73755 ']' 00:07:14.895 04:56:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 73755 00:07:14.895 04:56:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:07:14.895 04:56:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:14.895 04:56:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73755 00:07:14.895 killing process with pid 73755 00:07:14.895 04:56:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:14.895 04:56:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:14.895 04:56:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73755' 00:07:14.895 04:56:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 73755 00:07:14.895 [2024-12-14 04:56:25.704423] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:14.895 04:56:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 73755 00:07:14.895 [2024-12-14 04:56:25.719444] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:15.155 04:56:25 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:15.155 04:56:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.P03oJ9Kcx2 00:07:15.155 04:56:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:15.155 04:56:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.75 00:07:15.155 04:56:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:07:15.155 04:56:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:15.155 04:56:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:15.155 ************************************ 00:07:15.155 END TEST raid_read_error_test 00:07:15.155 ************************************ 00:07:15.155 04:56:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.75 != \0\.\0\0 ]] 00:07:15.155 00:07:15.155 real 0m3.242s 00:07:15.155 user 0m4.151s 00:07:15.155 sys 0m0.472s 00:07:15.155 04:56:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:15.155 04:56:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.155 04:56:26 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 2 write 00:07:15.155 04:56:26 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:15.155 04:56:26 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:15.155 04:56:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:15.155 ************************************ 00:07:15.155 START TEST raid_write_error_test 00:07:15.155 ************************************ 00:07:15.416 04:56:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 2 write 00:07:15.416 04:56:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 
00:07:15.416 04:56:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:15.416 04:56:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:07:15.416 04:56:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:15.416 04:56:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:15.416 04:56:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:15.416 04:56:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:15.416 04:56:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:15.416 04:56:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:15.416 04:56:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:15.416 04:56:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:15.416 04:56:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:15.416 04:56:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:15.416 04:56:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:15.416 04:56:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:15.416 04:56:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:15.416 04:56:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:15.416 04:56:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:15.416 04:56:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:07:15.416 04:56:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:15.416 
04:56:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:15.416 04:56:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:15.416 04:56:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.5brWqpIoed 00:07:15.416 04:56:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=73884 00:07:15.416 04:56:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:15.416 04:56:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 73884 00:07:15.416 04:56:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 73884 ']' 00:07:15.416 04:56:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:15.416 04:56:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:15.416 04:56:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:15.416 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:15.416 04:56:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:15.416 04:56:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.416 [2024-12-14 04:56:26.135260] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:07:15.416 [2024-12-14 04:56:26.135466] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73884 ] 00:07:15.416 [2024-12-14 04:56:26.295180] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.675 [2024-12-14 04:56:26.339614] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.675 [2024-12-14 04:56:26.380774] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:15.675 [2024-12-14 04:56:26.380810] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:16.245 04:56:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:16.245 04:56:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:07:16.245 04:56:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:16.245 04:56:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:16.245 04:56:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:16.245 04:56:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.245 BaseBdev1_malloc 00:07:16.245 04:56:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:16.245 04:56:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:16.246 04:56:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:16.246 04:56:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.246 true 00:07:16.246 04:56:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:07:16.246 04:56:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:16.246 04:56:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:16.246 04:56:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.246 [2024-12-14 04:56:26.982383] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:16.246 [2024-12-14 04:56:26.982434] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:16.246 [2024-12-14 04:56:26.982468] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:16.246 [2024-12-14 04:56:26.982476] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:16.246 [2024-12-14 04:56:26.984490] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:16.246 [2024-12-14 04:56:26.984598] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:16.246 BaseBdev1 00:07:16.246 04:56:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:16.246 04:56:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:16.246 04:56:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:16.246 04:56:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:16.246 04:56:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.246 BaseBdev2_malloc 00:07:16.246 04:56:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:16.246 04:56:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:16.246 04:56:27 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:16.246 04:56:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.246 true 00:07:16.246 04:56:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:16.246 04:56:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:16.246 04:56:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:16.246 04:56:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.246 [2024-12-14 04:56:27.040721] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:16.246 [2024-12-14 04:56:27.040793] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:16.246 [2024-12-14 04:56:27.040823] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:16.246 [2024-12-14 04:56:27.040838] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:16.246 [2024-12-14 04:56:27.043323] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:16.246 [2024-12-14 04:56:27.043363] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:16.246 BaseBdev2 00:07:16.246 04:56:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:16.246 04:56:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:16.246 04:56:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:16.246 04:56:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.246 [2024-12-14 04:56:27.052682] bdev_raid.c:3322:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:07:16.246 [2024-12-14 04:56:27.054488] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:16.246 [2024-12-14 04:56:27.054652] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:07:16.246 [2024-12-14 04:56:27.054664] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:16.246 [2024-12-14 04:56:27.054918] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:16.246 [2024-12-14 04:56:27.055050] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:07:16.246 [2024-12-14 04:56:27.055062] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:07:16.246 [2024-12-14 04:56:27.055222] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:16.246 04:56:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:16.246 04:56:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:16.246 04:56:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:16.246 04:56:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:16.246 04:56:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:16.246 04:56:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:16.246 04:56:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:16.246 04:56:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:16.246 04:56:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:16.246 04:56:27 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:16.246 04:56:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:16.246 04:56:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:16.246 04:56:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:16.246 04:56:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:16.246 04:56:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.246 04:56:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:16.246 04:56:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:16.246 "name": "raid_bdev1", 00:07:16.246 "uuid": "c29c37bc-d939-4432-89a5-24fa30a0ba4d", 00:07:16.246 "strip_size_kb": 64, 00:07:16.246 "state": "online", 00:07:16.246 "raid_level": "concat", 00:07:16.246 "superblock": true, 00:07:16.246 "num_base_bdevs": 2, 00:07:16.246 "num_base_bdevs_discovered": 2, 00:07:16.246 "num_base_bdevs_operational": 2, 00:07:16.246 "base_bdevs_list": [ 00:07:16.246 { 00:07:16.246 "name": "BaseBdev1", 00:07:16.246 "uuid": "12e0a589-2831-5ec8-aeac-40eb5d9c12d6", 00:07:16.246 "is_configured": true, 00:07:16.246 "data_offset": 2048, 00:07:16.246 "data_size": 63488 00:07:16.246 }, 00:07:16.246 { 00:07:16.246 "name": "BaseBdev2", 00:07:16.246 "uuid": "af6c9d74-ffab-526b-aa41-b58a660cc325", 00:07:16.246 "is_configured": true, 00:07:16.246 "data_offset": 2048, 00:07:16.246 "data_size": 63488 00:07:16.246 } 00:07:16.246 ] 00:07:16.246 }' 00:07:16.246 04:56:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:16.246 04:56:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.815 04:56:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- 
# sleep 1 00:07:16.815 04:56:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:16.815 [2024-12-14 04:56:27.620041] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:17.754 04:56:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:07:17.754 04:56:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.754 04:56:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.754 04:56:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.754 04:56:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:17.754 04:56:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:07:17.754 04:56:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:07:17.754 04:56:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:17.754 04:56:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:17.754 04:56:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:17.754 04:56:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:17.754 04:56:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:17.754 04:56:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:17.754 04:56:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:17.754 04:56:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:07:17.754 04:56:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:17.754 04:56:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:17.754 04:56:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:17.754 04:56:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:17.754 04:56:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.754 04:56:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.754 04:56:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.754 04:56:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:17.754 "name": "raid_bdev1", 00:07:17.754 "uuid": "c29c37bc-d939-4432-89a5-24fa30a0ba4d", 00:07:17.754 "strip_size_kb": 64, 00:07:17.754 "state": "online", 00:07:17.754 "raid_level": "concat", 00:07:17.754 "superblock": true, 00:07:17.754 "num_base_bdevs": 2, 00:07:17.754 "num_base_bdevs_discovered": 2, 00:07:17.754 "num_base_bdevs_operational": 2, 00:07:17.754 "base_bdevs_list": [ 00:07:17.754 { 00:07:17.754 "name": "BaseBdev1", 00:07:17.754 "uuid": "12e0a589-2831-5ec8-aeac-40eb5d9c12d6", 00:07:17.754 "is_configured": true, 00:07:17.754 "data_offset": 2048, 00:07:17.754 "data_size": 63488 00:07:17.754 }, 00:07:17.754 { 00:07:17.754 "name": "BaseBdev2", 00:07:17.754 "uuid": "af6c9d74-ffab-526b-aa41-b58a660cc325", 00:07:17.754 "is_configured": true, 00:07:17.754 "data_offset": 2048, 00:07:17.754 "data_size": 63488 00:07:17.754 } 00:07:17.754 ] 00:07:17.754 }' 00:07:17.754 04:56:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:17.754 04:56:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.333 04:56:28 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:18.333 04:56:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:18.333 04:56:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.333 [2024-12-14 04:56:28.991597] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:18.333 [2024-12-14 04:56:28.991628] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:18.333 [2024-12-14 04:56:28.994121] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:18.333 [2024-12-14 04:56:28.994202] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:18.333 [2024-12-14 04:56:28.994263] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:18.333 [2024-12-14 04:56:28.994310] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:07:18.334 { 00:07:18.334 "results": [ 00:07:18.334 { 00:07:18.334 "job": "raid_bdev1", 00:07:18.334 "core_mask": "0x1", 00:07:18.334 "workload": "randrw", 00:07:18.334 "percentage": 50, 00:07:18.334 "status": "finished", 00:07:18.334 "queue_depth": 1, 00:07:18.334 "io_size": 131072, 00:07:18.334 "runtime": 1.372427, 00:07:18.334 "iops": 17938.294714400112, 00:07:18.334 "mibps": 2242.286839300014, 00:07:18.334 "io_failed": 1, 00:07:18.334 "io_timeout": 0, 00:07:18.334 "avg_latency_us": 77.01784823642511, 00:07:18.334 "min_latency_us": 24.146724890829695, 00:07:18.334 "max_latency_us": 1359.3711790393013 00:07:18.334 } 00:07:18.334 ], 00:07:18.334 "core_count": 1 00:07:18.334 } 00:07:18.334 04:56:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:18.334 04:56:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 73884 00:07:18.334 04:56:28 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 73884 ']' 00:07:18.334 04:56:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 73884 00:07:18.334 04:56:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:07:18.334 04:56:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:18.334 04:56:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73884 00:07:18.334 04:56:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:18.334 04:56:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:18.334 04:56:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73884' 00:07:18.334 killing process with pid 73884 00:07:18.334 04:56:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 73884 00:07:18.334 [2024-12-14 04:56:29.041460] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:18.334 04:56:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 73884 00:07:18.334 [2024-12-14 04:56:29.057118] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:18.619 04:56:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.5brWqpIoed 00:07:18.619 04:56:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:18.619 04:56:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:18.619 04:56:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:07:18.619 04:56:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:07:18.619 ************************************ 00:07:18.619 END TEST raid_write_error_test 00:07:18.619 
************************************ 00:07:18.619 04:56:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:18.619 04:56:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:18.619 04:56:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:07:18.619 00:07:18.619 real 0m3.265s 00:07:18.619 user 0m4.152s 00:07:18.619 sys 0m0.506s 00:07:18.619 04:56:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:18.619 04:56:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.619 04:56:29 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:07:18.619 04:56:29 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:07:18.619 04:56:29 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:18.619 04:56:29 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:18.619 04:56:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:18.619 ************************************ 00:07:18.619 START TEST raid_state_function_test 00:07:18.619 ************************************ 00:07:18.619 04:56:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 false 00:07:18.619 04:56:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:07:18.619 04:56:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:18.619 04:56:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:07:18.619 04:56:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:18.619 04:56:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:18.619 04:56:29 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:18.619 04:56:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:18.619 04:56:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:18.619 04:56:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:18.619 04:56:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:18.619 04:56:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:18.619 04:56:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:18.619 04:56:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:18.619 04:56:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:18.619 04:56:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:18.619 04:56:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:18.619 04:56:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:18.619 04:56:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:18.619 04:56:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:07:18.619 04:56:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:07:18.619 04:56:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:07:18.619 04:56:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:07:18.619 04:56:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=74017 00:07:18.619 04:56:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:18.619 04:56:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 74017' 00:07:18.619 Process raid pid: 74017 00:07:18.619 04:56:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 74017 00:07:18.619 04:56:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 74017 ']' 00:07:18.619 04:56:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:18.619 04:56:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:18.619 04:56:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:18.619 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:18.619 04:56:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:18.619 04:56:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.619 [2024-12-14 04:56:29.461005] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:07:18.619 [2024-12-14 04:56:29.461233] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:18.891 [2024-12-14 04:56:29.620028] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.891 [2024-12-14 04:56:29.666387] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.891 [2024-12-14 04:56:29.709504] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:18.891 [2024-12-14 04:56:29.709624] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:19.459 04:56:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:19.459 04:56:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:07:19.459 04:56:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:19.459 04:56:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:19.459 04:56:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.459 [2024-12-14 04:56:30.283116] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:19.459 [2024-12-14 04:56:30.283256] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:19.459 [2024-12-14 04:56:30.283289] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:19.459 [2024-12-14 04:56:30.283313] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:19.459 04:56:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:19.459 04:56:30 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:19.459 04:56:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:19.459 04:56:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:19.459 04:56:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:19.459 04:56:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:19.459 04:56:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:19.459 04:56:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:19.459 04:56:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:19.459 04:56:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:19.459 04:56:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:19.460 04:56:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:19.460 04:56:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:19.460 04:56:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:19.460 04:56:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.460 04:56:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:19.460 04:56:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:19.460 "name": "Existed_Raid", 00:07:19.460 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:19.460 "strip_size_kb": 0, 00:07:19.460 "state": "configuring", 00:07:19.460 
"raid_level": "raid1", 00:07:19.460 "superblock": false, 00:07:19.460 "num_base_bdevs": 2, 00:07:19.460 "num_base_bdevs_discovered": 0, 00:07:19.460 "num_base_bdevs_operational": 2, 00:07:19.460 "base_bdevs_list": [ 00:07:19.460 { 00:07:19.460 "name": "BaseBdev1", 00:07:19.460 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:19.460 "is_configured": false, 00:07:19.460 "data_offset": 0, 00:07:19.460 "data_size": 0 00:07:19.460 }, 00:07:19.460 { 00:07:19.460 "name": "BaseBdev2", 00:07:19.460 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:19.460 "is_configured": false, 00:07:19.460 "data_offset": 0, 00:07:19.460 "data_size": 0 00:07:19.460 } 00:07:19.460 ] 00:07:19.460 }' 00:07:19.460 04:56:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:19.460 04:56:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.028 04:56:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:20.028 04:56:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:20.028 04:56:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.028 [2024-12-14 04:56:30.710302] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:20.028 [2024-12-14 04:56:30.710345] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:07:20.028 04:56:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:20.028 04:56:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:20.028 04:56:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:20.028 04:56:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:07:20.028 [2024-12-14 04:56:30.722309] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:20.028 [2024-12-14 04:56:30.722348] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:20.028 [2024-12-14 04:56:30.722356] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:20.028 [2024-12-14 04:56:30.722365] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:20.028 04:56:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:20.028 04:56:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:20.028 04:56:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:20.028 04:56:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.028 [2024-12-14 04:56:30.742989] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:20.028 BaseBdev1 00:07:20.028 04:56:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:20.028 04:56:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:20.028 04:56:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:07:20.028 04:56:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:20.028 04:56:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:07:20.028 04:56:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:20.028 04:56:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:20.028 04:56:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # 
rpc_cmd bdev_wait_for_examine 00:07:20.028 04:56:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:20.028 04:56:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.028 04:56:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:20.028 04:56:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:20.028 04:56:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:20.028 04:56:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.028 [ 00:07:20.028 { 00:07:20.028 "name": "BaseBdev1", 00:07:20.028 "aliases": [ 00:07:20.028 "53f76de8-a352-4285-a6f3-ae66318bfc00" 00:07:20.028 ], 00:07:20.028 "product_name": "Malloc disk", 00:07:20.028 "block_size": 512, 00:07:20.028 "num_blocks": 65536, 00:07:20.028 "uuid": "53f76de8-a352-4285-a6f3-ae66318bfc00", 00:07:20.028 "assigned_rate_limits": { 00:07:20.028 "rw_ios_per_sec": 0, 00:07:20.028 "rw_mbytes_per_sec": 0, 00:07:20.028 "r_mbytes_per_sec": 0, 00:07:20.028 "w_mbytes_per_sec": 0 00:07:20.028 }, 00:07:20.028 "claimed": true, 00:07:20.028 "claim_type": "exclusive_write", 00:07:20.028 "zoned": false, 00:07:20.028 "supported_io_types": { 00:07:20.028 "read": true, 00:07:20.028 "write": true, 00:07:20.028 "unmap": true, 00:07:20.028 "flush": true, 00:07:20.028 "reset": true, 00:07:20.028 "nvme_admin": false, 00:07:20.028 "nvme_io": false, 00:07:20.028 "nvme_io_md": false, 00:07:20.028 "write_zeroes": true, 00:07:20.028 "zcopy": true, 00:07:20.028 "get_zone_info": false, 00:07:20.028 "zone_management": false, 00:07:20.028 "zone_append": false, 00:07:20.028 "compare": false, 00:07:20.028 "compare_and_write": false, 00:07:20.028 "abort": true, 00:07:20.028 "seek_hole": false, 00:07:20.028 "seek_data": false, 00:07:20.028 "copy": true, 00:07:20.028 "nvme_iov_md": 
false 00:07:20.028 }, 00:07:20.028 "memory_domains": [ 00:07:20.028 { 00:07:20.028 "dma_device_id": "system", 00:07:20.028 "dma_device_type": 1 00:07:20.028 }, 00:07:20.028 { 00:07:20.028 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:20.028 "dma_device_type": 2 00:07:20.028 } 00:07:20.028 ], 00:07:20.028 "driver_specific": {} 00:07:20.028 } 00:07:20.028 ] 00:07:20.028 04:56:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:20.028 04:56:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:07:20.028 04:56:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:20.028 04:56:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:20.028 04:56:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:20.028 04:56:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:20.028 04:56:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:20.028 04:56:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:20.028 04:56:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:20.028 04:56:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:20.028 04:56:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:20.028 04:56:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:20.028 04:56:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:20.028 04:56:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:20.028 
04:56:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:20.028 04:56:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.028 04:56:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:20.028 04:56:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:20.028 "name": "Existed_Raid", 00:07:20.028 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:20.028 "strip_size_kb": 0, 00:07:20.028 "state": "configuring", 00:07:20.028 "raid_level": "raid1", 00:07:20.028 "superblock": false, 00:07:20.028 "num_base_bdevs": 2, 00:07:20.028 "num_base_bdevs_discovered": 1, 00:07:20.028 "num_base_bdevs_operational": 2, 00:07:20.028 "base_bdevs_list": [ 00:07:20.028 { 00:07:20.028 "name": "BaseBdev1", 00:07:20.028 "uuid": "53f76de8-a352-4285-a6f3-ae66318bfc00", 00:07:20.028 "is_configured": true, 00:07:20.028 "data_offset": 0, 00:07:20.028 "data_size": 65536 00:07:20.028 }, 00:07:20.028 { 00:07:20.028 "name": "BaseBdev2", 00:07:20.028 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:20.028 "is_configured": false, 00:07:20.028 "data_offset": 0, 00:07:20.028 "data_size": 0 00:07:20.028 } 00:07:20.028 ] 00:07:20.028 }' 00:07:20.028 04:56:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:20.028 04:56:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.287 04:56:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:20.287 04:56:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:20.287 04:56:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.287 [2024-12-14 04:56:31.134337] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:20.287 [2024-12-14 04:56:31.134437] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:07:20.287 04:56:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:20.287 04:56:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:20.287 04:56:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:20.287 04:56:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.287 [2024-12-14 04:56:31.142364] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:20.287 [2024-12-14 04:56:31.144170] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:20.287 [2024-12-14 04:56:31.144227] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:20.287 04:56:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:20.287 04:56:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:20.287 04:56:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:20.287 04:56:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:20.287 04:56:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:20.287 04:56:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:20.287 04:56:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:20.288 04:56:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:20.288 04:56:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:07:20.288 04:56:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:20.288 04:56:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:20.288 04:56:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:20.288 04:56:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:20.288 04:56:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:20.288 04:56:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:20.288 04:56:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:20.288 04:56:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.546 04:56:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:20.546 04:56:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:20.546 "name": "Existed_Raid", 00:07:20.546 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:20.546 "strip_size_kb": 0, 00:07:20.546 "state": "configuring", 00:07:20.546 "raid_level": "raid1", 00:07:20.546 "superblock": false, 00:07:20.546 "num_base_bdevs": 2, 00:07:20.546 "num_base_bdevs_discovered": 1, 00:07:20.546 "num_base_bdevs_operational": 2, 00:07:20.546 "base_bdevs_list": [ 00:07:20.546 { 00:07:20.546 "name": "BaseBdev1", 00:07:20.546 "uuid": "53f76de8-a352-4285-a6f3-ae66318bfc00", 00:07:20.546 "is_configured": true, 00:07:20.546 "data_offset": 0, 00:07:20.546 "data_size": 65536 00:07:20.546 }, 00:07:20.546 { 00:07:20.546 "name": "BaseBdev2", 00:07:20.546 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:20.546 "is_configured": false, 00:07:20.546 "data_offset": 0, 00:07:20.546 "data_size": 0 00:07:20.546 } 00:07:20.546 ] 
00:07:20.546 }' 00:07:20.546 04:56:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:20.546 04:56:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.805 04:56:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:20.805 04:56:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:20.805 04:56:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.805 [2024-12-14 04:56:31.590270] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:20.805 [2024-12-14 04:56:31.590558] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:07:20.805 [2024-12-14 04:56:31.590660] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:07:20.805 [2024-12-14 04:56:31.591736] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:20.805 [2024-12-14 04:56:31.592363] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:07:20.805 [2024-12-14 04:56:31.592558] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:07:20.805 [2024-12-14 04:56:31.593247] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:20.805 BaseBdev2 00:07:20.805 04:56:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:20.805 04:56:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:20.805 04:56:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:07:20.805 04:56:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:20.805 04:56:31 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@901 -- # local i 00:07:20.805 04:56:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:20.805 04:56:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:20.805 04:56:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:20.805 04:56:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:20.806 04:56:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.806 04:56:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:20.806 04:56:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:20.806 04:56:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:20.806 04:56:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.806 [ 00:07:20.806 { 00:07:20.806 "name": "BaseBdev2", 00:07:20.806 "aliases": [ 00:07:20.806 "f46eeaaf-c6c0-4f12-bedd-69a2f84e57f5" 00:07:20.806 ], 00:07:20.806 "product_name": "Malloc disk", 00:07:20.806 "block_size": 512, 00:07:20.806 "num_blocks": 65536, 00:07:20.806 "uuid": "f46eeaaf-c6c0-4f12-bedd-69a2f84e57f5", 00:07:20.806 "assigned_rate_limits": { 00:07:20.806 "rw_ios_per_sec": 0, 00:07:20.806 "rw_mbytes_per_sec": 0, 00:07:20.806 "r_mbytes_per_sec": 0, 00:07:20.806 "w_mbytes_per_sec": 0 00:07:20.806 }, 00:07:20.806 "claimed": true, 00:07:20.806 "claim_type": "exclusive_write", 00:07:20.806 "zoned": false, 00:07:20.806 "supported_io_types": { 00:07:20.806 "read": true, 00:07:20.806 "write": true, 00:07:20.806 "unmap": true, 00:07:20.806 "flush": true, 00:07:20.806 "reset": true, 00:07:20.806 "nvme_admin": false, 00:07:20.806 "nvme_io": false, 00:07:20.806 "nvme_io_md": false, 00:07:20.806 "write_zeroes": 
true, 00:07:20.806 "zcopy": true, 00:07:20.806 "get_zone_info": false, 00:07:20.806 "zone_management": false, 00:07:20.806 "zone_append": false, 00:07:20.806 "compare": false, 00:07:20.806 "compare_and_write": false, 00:07:20.806 "abort": true, 00:07:20.806 "seek_hole": false, 00:07:20.806 "seek_data": false, 00:07:20.806 "copy": true, 00:07:20.806 "nvme_iov_md": false 00:07:20.806 }, 00:07:20.806 "memory_domains": [ 00:07:20.806 { 00:07:20.806 "dma_device_id": "system", 00:07:20.806 "dma_device_type": 1 00:07:20.806 }, 00:07:20.806 { 00:07:20.806 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:20.806 "dma_device_type": 2 00:07:20.806 } 00:07:20.806 ], 00:07:20.806 "driver_specific": {} 00:07:20.806 } 00:07:20.806 ] 00:07:20.806 04:56:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:20.806 04:56:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:07:20.806 04:56:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:20.806 04:56:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:20.806 04:56:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:07:20.806 04:56:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:20.806 04:56:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:20.806 04:56:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:20.806 04:56:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:20.806 04:56:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:20.806 04:56:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:20.806 04:56:31 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:20.806 04:56:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:20.806 04:56:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:20.806 04:56:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:20.806 04:56:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:20.806 04:56:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.806 04:56:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:20.806 04:56:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:20.806 04:56:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:20.806 "name": "Existed_Raid", 00:07:20.806 "uuid": "094bc5e3-fafb-4ea6-a848-88a291aa0e83", 00:07:20.806 "strip_size_kb": 0, 00:07:20.806 "state": "online", 00:07:20.806 "raid_level": "raid1", 00:07:20.806 "superblock": false, 00:07:20.806 "num_base_bdevs": 2, 00:07:20.806 "num_base_bdevs_discovered": 2, 00:07:20.806 "num_base_bdevs_operational": 2, 00:07:20.806 "base_bdevs_list": [ 00:07:20.806 { 00:07:20.806 "name": "BaseBdev1", 00:07:20.806 "uuid": "53f76de8-a352-4285-a6f3-ae66318bfc00", 00:07:20.806 "is_configured": true, 00:07:20.806 "data_offset": 0, 00:07:20.806 "data_size": 65536 00:07:20.806 }, 00:07:20.806 { 00:07:20.806 "name": "BaseBdev2", 00:07:20.806 "uuid": "f46eeaaf-c6c0-4f12-bedd-69a2f84e57f5", 00:07:20.806 "is_configured": true, 00:07:20.806 "data_offset": 0, 00:07:20.806 "data_size": 65536 00:07:20.806 } 00:07:20.806 ] 00:07:20.806 }' 00:07:20.806 04:56:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:20.806 04:56:31 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.376 04:56:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:21.376 04:56:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:21.376 04:56:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:21.376 04:56:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:21.376 04:56:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:21.376 04:56:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:21.376 04:56:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:21.376 04:56:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.376 04:56:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.376 04:56:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:21.376 [2024-12-14 04:56:32.077610] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:21.376 04:56:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:21.376 04:56:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:21.376 "name": "Existed_Raid", 00:07:21.376 "aliases": [ 00:07:21.376 "094bc5e3-fafb-4ea6-a848-88a291aa0e83" 00:07:21.376 ], 00:07:21.376 "product_name": "Raid Volume", 00:07:21.376 "block_size": 512, 00:07:21.376 "num_blocks": 65536, 00:07:21.376 "uuid": "094bc5e3-fafb-4ea6-a848-88a291aa0e83", 00:07:21.376 "assigned_rate_limits": { 00:07:21.376 "rw_ios_per_sec": 0, 00:07:21.376 "rw_mbytes_per_sec": 0, 00:07:21.376 "r_mbytes_per_sec": 0, 00:07:21.376 
"w_mbytes_per_sec": 0 00:07:21.376 }, 00:07:21.376 "claimed": false, 00:07:21.376 "zoned": false, 00:07:21.376 "supported_io_types": { 00:07:21.376 "read": true, 00:07:21.376 "write": true, 00:07:21.376 "unmap": false, 00:07:21.376 "flush": false, 00:07:21.376 "reset": true, 00:07:21.376 "nvme_admin": false, 00:07:21.376 "nvme_io": false, 00:07:21.376 "nvme_io_md": false, 00:07:21.376 "write_zeroes": true, 00:07:21.376 "zcopy": false, 00:07:21.376 "get_zone_info": false, 00:07:21.376 "zone_management": false, 00:07:21.376 "zone_append": false, 00:07:21.376 "compare": false, 00:07:21.376 "compare_and_write": false, 00:07:21.376 "abort": false, 00:07:21.376 "seek_hole": false, 00:07:21.376 "seek_data": false, 00:07:21.376 "copy": false, 00:07:21.376 "nvme_iov_md": false 00:07:21.376 }, 00:07:21.376 "memory_domains": [ 00:07:21.376 { 00:07:21.376 "dma_device_id": "system", 00:07:21.376 "dma_device_type": 1 00:07:21.376 }, 00:07:21.376 { 00:07:21.376 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:21.376 "dma_device_type": 2 00:07:21.376 }, 00:07:21.376 { 00:07:21.376 "dma_device_id": "system", 00:07:21.376 "dma_device_type": 1 00:07:21.376 }, 00:07:21.376 { 00:07:21.376 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:21.376 "dma_device_type": 2 00:07:21.376 } 00:07:21.376 ], 00:07:21.376 "driver_specific": { 00:07:21.376 "raid": { 00:07:21.376 "uuid": "094bc5e3-fafb-4ea6-a848-88a291aa0e83", 00:07:21.376 "strip_size_kb": 0, 00:07:21.376 "state": "online", 00:07:21.376 "raid_level": "raid1", 00:07:21.376 "superblock": false, 00:07:21.376 "num_base_bdevs": 2, 00:07:21.376 "num_base_bdevs_discovered": 2, 00:07:21.376 "num_base_bdevs_operational": 2, 00:07:21.376 "base_bdevs_list": [ 00:07:21.376 { 00:07:21.376 "name": "BaseBdev1", 00:07:21.376 "uuid": "53f76de8-a352-4285-a6f3-ae66318bfc00", 00:07:21.376 "is_configured": true, 00:07:21.376 "data_offset": 0, 00:07:21.376 "data_size": 65536 00:07:21.376 }, 00:07:21.376 { 00:07:21.376 "name": "BaseBdev2", 00:07:21.376 "uuid": 
"f46eeaaf-c6c0-4f12-bedd-69a2f84e57f5", 00:07:21.376 "is_configured": true, 00:07:21.376 "data_offset": 0, 00:07:21.376 "data_size": 65536 00:07:21.376 } 00:07:21.376 ] 00:07:21.376 } 00:07:21.376 } 00:07:21.376 }' 00:07:21.376 04:56:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:21.376 04:56:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:21.376 BaseBdev2' 00:07:21.376 04:56:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:21.376 04:56:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:21.376 04:56:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:21.376 04:56:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:21.376 04:56:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:21.376 04:56:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.376 04:56:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.376 04:56:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:21.376 04:56:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:21.376 04:56:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:21.376 04:56:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:21.376 04:56:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:21.376 04:56:32 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.376 04:56:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.376 04:56:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:21.376 04:56:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:21.376 04:56:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:21.376 04:56:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:21.376 04:56:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:21.376 04:56:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.376 04:56:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.636 [2024-12-14 04:56:32.257093] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:21.636 04:56:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:21.636 04:56:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:21.636 04:56:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:07:21.636 04:56:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:21.636 04:56:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:07:21.636 04:56:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:07:21.636 04:56:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:07:21.636 04:56:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:07:21.636 04:56:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:21.636 04:56:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:21.636 04:56:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:21.636 04:56:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:21.636 04:56:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:21.636 04:56:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:21.636 04:56:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:21.636 04:56:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:21.636 04:56:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:21.636 04:56:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.636 04:56:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.636 04:56:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:21.636 04:56:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:21.636 04:56:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:21.636 "name": "Existed_Raid", 00:07:21.636 "uuid": "094bc5e3-fafb-4ea6-a848-88a291aa0e83", 00:07:21.636 "strip_size_kb": 0, 00:07:21.636 "state": "online", 00:07:21.636 "raid_level": "raid1", 00:07:21.636 "superblock": false, 00:07:21.636 "num_base_bdevs": 2, 00:07:21.636 "num_base_bdevs_discovered": 1, 00:07:21.636 "num_base_bdevs_operational": 1, 00:07:21.636 "base_bdevs_list": [ 00:07:21.636 { 
00:07:21.636 "name": null, 00:07:21.636 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:21.636 "is_configured": false, 00:07:21.636 "data_offset": 0, 00:07:21.636 "data_size": 65536 00:07:21.636 }, 00:07:21.636 { 00:07:21.636 "name": "BaseBdev2", 00:07:21.636 "uuid": "f46eeaaf-c6c0-4f12-bedd-69a2f84e57f5", 00:07:21.636 "is_configured": true, 00:07:21.636 "data_offset": 0, 00:07:21.636 "data_size": 65536 00:07:21.636 } 00:07:21.636 ] 00:07:21.636 }' 00:07:21.636 04:56:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:21.636 04:56:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.896 04:56:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:21.896 04:56:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:21.896 04:56:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:21.896 04:56:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:21.896 04:56:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.896 04:56:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.896 04:56:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:21.896 04:56:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:21.896 04:56:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:21.896 04:56:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:21.896 04:56:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.896 04:56:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:07:21.896 [2024-12-14 04:56:32.751571] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:21.896 [2024-12-14 04:56:32.751660] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:21.896 [2024-12-14 04:56:32.763068] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:21.896 [2024-12-14 04:56:32.763118] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:21.896 [2024-12-14 04:56:32.763145] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:07:21.896 04:56:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:21.896 04:56:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:21.896 04:56:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:21.896 04:56:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:21.896 04:56:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:21.896 04:56:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.896 04:56:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.156 04:56:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.156 04:56:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:22.156 04:56:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:22.156 04:56:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:22.156 04:56:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 74017 00:07:22.156 04:56:32 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 74017 ']' 00:07:22.156 04:56:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 74017 00:07:22.156 04:56:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:07:22.156 04:56:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:22.156 04:56:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74017 00:07:22.156 killing process with pid 74017 00:07:22.156 04:56:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:22.156 04:56:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:22.156 04:56:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74017' 00:07:22.156 04:56:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 74017 00:07:22.156 [2024-12-14 04:56:32.845950] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:22.156 04:56:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 74017 00:07:22.156 [2024-12-14 04:56:32.846907] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:22.416 04:56:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:07:22.416 00:07:22.416 real 0m3.711s 00:07:22.416 user 0m5.805s 00:07:22.416 sys 0m0.726s 00:07:22.416 04:56:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:22.416 04:56:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.416 ************************************ 00:07:22.416 END TEST raid_state_function_test 00:07:22.416 ************************************ 00:07:22.416 04:56:33 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test 
raid_state_function_test_sb raid_state_function_test raid1 2 true 00:07:22.416 04:56:33 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:22.416 04:56:33 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:22.416 04:56:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:22.416 ************************************ 00:07:22.416 START TEST raid_state_function_test_sb 00:07:22.416 ************************************ 00:07:22.416 04:56:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 true 00:07:22.416 04:56:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:07:22.416 04:56:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:22.416 04:56:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:07:22.416 04:56:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:22.416 04:56:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:22.416 04:56:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:22.416 04:56:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:22.416 04:56:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:22.416 04:56:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:22.416 04:56:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:22.416 04:56:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:22.416 04:56:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:22.416 04:56:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # 
base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:22.416 04:56:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:22.416 04:56:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:22.416 04:56:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:22.416 04:56:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:22.416 04:56:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:22.416 04:56:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:07:22.416 04:56:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:07:22.416 04:56:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:07:22.416 04:56:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:07:22.416 Process raid pid: 74253 00:07:22.416 04:56:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=74253 00:07:22.416 04:56:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:22.416 04:56:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 74253' 00:07:22.417 04:56:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 74253 00:07:22.417 04:56:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 74253 ']' 00:07:22.417 04:56:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:22.417 04:56:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:22.417 04:56:33 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:22.417 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:22.417 04:56:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:22.417 04:56:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:22.417 [2024-12-14 04:56:33.242204] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:22.417 [2024-12-14 04:56:33.242425] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:22.676 [2024-12-14 04:56:33.400751] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.676 [2024-12-14 04:56:33.445333] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.676 [2024-12-14 04:56:33.487439] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:22.676 [2024-12-14 04:56:33.487551] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:23.245 04:56:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:23.245 04:56:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:07:23.245 04:56:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:23.245 04:56:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:23.245 04:56:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:23.245 [2024-12-14 04:56:34.060477] 
bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:23.245 [2024-12-14 04:56:34.060590] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:23.245 [2024-12-14 04:56:34.060614] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:23.245 [2024-12-14 04:56:34.060624] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:23.245 04:56:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:23.245 04:56:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:23.245 04:56:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:23.245 04:56:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:23.245 04:56:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:23.245 04:56:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:23.245 04:56:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:23.245 04:56:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:23.245 04:56:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:23.245 04:56:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:23.245 04:56:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:23.245 04:56:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:23.245 04:56:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r 
'.[] | select(.name == "Existed_Raid")' 00:07:23.245 04:56:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:23.245 04:56:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:23.245 04:56:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:23.245 04:56:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:23.245 "name": "Existed_Raid", 00:07:23.245 "uuid": "0bcaafec-41b5-4a83-9f4e-33eb6f639afa", 00:07:23.245 "strip_size_kb": 0, 00:07:23.245 "state": "configuring", 00:07:23.245 "raid_level": "raid1", 00:07:23.245 "superblock": true, 00:07:23.245 "num_base_bdevs": 2, 00:07:23.245 "num_base_bdevs_discovered": 0, 00:07:23.245 "num_base_bdevs_operational": 2, 00:07:23.245 "base_bdevs_list": [ 00:07:23.245 { 00:07:23.245 "name": "BaseBdev1", 00:07:23.245 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:23.245 "is_configured": false, 00:07:23.245 "data_offset": 0, 00:07:23.245 "data_size": 0 00:07:23.245 }, 00:07:23.245 { 00:07:23.245 "name": "BaseBdev2", 00:07:23.245 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:23.245 "is_configured": false, 00:07:23.245 "data_offset": 0, 00:07:23.245 "data_size": 0 00:07:23.245 } 00:07:23.245 ] 00:07:23.245 }' 00:07:23.245 04:56:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:23.245 04:56:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:23.814 04:56:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:23.814 04:56:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:23.814 04:56:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:23.814 [2024-12-14 04:56:34.487652] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid 
bdev: Existed_Raid 00:07:23.814 [2024-12-14 04:56:34.487752] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:07:23.814 04:56:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:23.814 04:56:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:23.814 04:56:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:23.814 04:56:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:23.814 [2024-12-14 04:56:34.495683] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:23.814 [2024-12-14 04:56:34.495759] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:23.814 [2024-12-14 04:56:34.495792] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:23.814 [2024-12-14 04:56:34.495819] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:23.814 04:56:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:23.814 04:56:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:23.814 04:56:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:23.814 04:56:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:23.814 [2024-12-14 04:56:34.512498] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:23.814 BaseBdev1 00:07:23.814 04:56:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:23.814 04:56:34 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:23.814 04:56:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:07:23.815 04:56:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:23.815 04:56:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:07:23.815 04:56:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:23.815 04:56:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:23.815 04:56:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:23.815 04:56:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:23.815 04:56:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:23.815 04:56:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:23.815 04:56:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:23.815 04:56:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:23.815 04:56:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:23.815 [ 00:07:23.815 { 00:07:23.815 "name": "BaseBdev1", 00:07:23.815 "aliases": [ 00:07:23.815 "e618c6f8-9552-4ce9-93bd-478113e8abe3" 00:07:23.815 ], 00:07:23.815 "product_name": "Malloc disk", 00:07:23.815 "block_size": 512, 00:07:23.815 "num_blocks": 65536, 00:07:23.815 "uuid": "e618c6f8-9552-4ce9-93bd-478113e8abe3", 00:07:23.815 "assigned_rate_limits": { 00:07:23.815 "rw_ios_per_sec": 0, 00:07:23.815 "rw_mbytes_per_sec": 0, 00:07:23.815 "r_mbytes_per_sec": 0, 00:07:23.815 "w_mbytes_per_sec": 0 00:07:23.815 }, 00:07:23.815 "claimed": true, 
00:07:23.815 "claim_type": "exclusive_write", 00:07:23.815 "zoned": false, 00:07:23.815 "supported_io_types": { 00:07:23.815 "read": true, 00:07:23.815 "write": true, 00:07:23.815 "unmap": true, 00:07:23.815 "flush": true, 00:07:23.815 "reset": true, 00:07:23.815 "nvme_admin": false, 00:07:23.815 "nvme_io": false, 00:07:23.815 "nvme_io_md": false, 00:07:23.815 "write_zeroes": true, 00:07:23.815 "zcopy": true, 00:07:23.815 "get_zone_info": false, 00:07:23.815 "zone_management": false, 00:07:23.815 "zone_append": false, 00:07:23.815 "compare": false, 00:07:23.815 "compare_and_write": false, 00:07:23.815 "abort": true, 00:07:23.815 "seek_hole": false, 00:07:23.815 "seek_data": false, 00:07:23.815 "copy": true, 00:07:23.815 "nvme_iov_md": false 00:07:23.815 }, 00:07:23.815 "memory_domains": [ 00:07:23.815 { 00:07:23.815 "dma_device_id": "system", 00:07:23.815 "dma_device_type": 1 00:07:23.815 }, 00:07:23.815 { 00:07:23.815 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:23.815 "dma_device_type": 2 00:07:23.815 } 00:07:23.815 ], 00:07:23.815 "driver_specific": {} 00:07:23.815 } 00:07:23.815 ] 00:07:23.815 04:56:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:23.815 04:56:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:07:23.815 04:56:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:23.815 04:56:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:23.815 04:56:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:23.815 04:56:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:23.815 04:56:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:23.815 04:56:34 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:23.815 04:56:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:23.815 04:56:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:23.815 04:56:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:23.815 04:56:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:23.815 04:56:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:23.815 04:56:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:23.815 04:56:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:23.815 04:56:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:23.815 04:56:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:23.815 04:56:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:23.815 "name": "Existed_Raid", 00:07:23.815 "uuid": "4236d602-7f8b-400f-93bb-44c103d073a4", 00:07:23.815 "strip_size_kb": 0, 00:07:23.815 "state": "configuring", 00:07:23.815 "raid_level": "raid1", 00:07:23.815 "superblock": true, 00:07:23.815 "num_base_bdevs": 2, 00:07:23.815 "num_base_bdevs_discovered": 1, 00:07:23.815 "num_base_bdevs_operational": 2, 00:07:23.815 "base_bdevs_list": [ 00:07:23.815 { 00:07:23.815 "name": "BaseBdev1", 00:07:23.815 "uuid": "e618c6f8-9552-4ce9-93bd-478113e8abe3", 00:07:23.815 "is_configured": true, 00:07:23.815 "data_offset": 2048, 00:07:23.815 "data_size": 63488 00:07:23.815 }, 00:07:23.815 { 00:07:23.815 "name": "BaseBdev2", 00:07:23.815 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:23.815 "is_configured": false, 00:07:23.815 
"data_offset": 0, 00:07:23.815 "data_size": 0 00:07:23.815 } 00:07:23.815 ] 00:07:23.815 }' 00:07:23.815 04:56:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:23.815 04:56:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:24.383 04:56:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:24.383 04:56:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:24.383 04:56:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:24.383 [2024-12-14 04:56:34.971731] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:24.383 [2024-12-14 04:56:34.971775] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:07:24.383 04:56:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:24.383 04:56:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:24.383 04:56:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:24.383 04:56:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:24.383 [2024-12-14 04:56:34.983752] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:24.383 [2024-12-14 04:56:34.985551] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:24.383 [2024-12-14 04:56:34.985626] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:24.383 04:56:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:24.383 04:56:34 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:24.383 04:56:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:24.383 04:56:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:24.383 04:56:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:24.383 04:56:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:24.383 04:56:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:24.383 04:56:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:24.383 04:56:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:24.383 04:56:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:24.383 04:56:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:24.383 04:56:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:24.383 04:56:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:24.383 04:56:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:24.383 04:56:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:24.383 04:56:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:24.383 04:56:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:24.383 04:56:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:24.383 04:56:35 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:24.383 "name": "Existed_Raid", 00:07:24.383 "uuid": "d4e03c74-b133-42ef-9e5e-042d839b6afc", 00:07:24.383 "strip_size_kb": 0, 00:07:24.383 "state": "configuring", 00:07:24.383 "raid_level": "raid1", 00:07:24.383 "superblock": true, 00:07:24.383 "num_base_bdevs": 2, 00:07:24.383 "num_base_bdevs_discovered": 1, 00:07:24.383 "num_base_bdevs_operational": 2, 00:07:24.383 "base_bdevs_list": [ 00:07:24.383 { 00:07:24.383 "name": "BaseBdev1", 00:07:24.383 "uuid": "e618c6f8-9552-4ce9-93bd-478113e8abe3", 00:07:24.383 "is_configured": true, 00:07:24.383 "data_offset": 2048, 00:07:24.383 "data_size": 63488 00:07:24.383 }, 00:07:24.383 { 00:07:24.383 "name": "BaseBdev2", 00:07:24.383 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:24.383 "is_configured": false, 00:07:24.383 "data_offset": 0, 00:07:24.384 "data_size": 0 00:07:24.384 } 00:07:24.384 ] 00:07:24.384 }' 00:07:24.384 04:56:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:24.384 04:56:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:24.642 04:56:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:24.642 04:56:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:24.642 04:56:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:24.642 [2024-12-14 04:56:35.402822] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:24.642 [2024-12-14 04:56:35.403611] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:07:24.642 [2024-12-14 04:56:35.403794] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:24.642 BaseBdev2 00:07:24.642 [2024-12-14 04:56:35.404830] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000005ba0 00:07:24.642 04:56:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:24.642 [2024-12-14 04:56:35.405348] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:07:24.642 [2024-12-14 04:56:35.405502] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:07:24.642 04:56:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:24.643 [2024-12-14 04:56:35.405966] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:24.643 04:56:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:07:24.643 04:56:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:24.643 04:56:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:07:24.643 04:56:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:24.643 04:56:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:24.643 04:56:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:24.643 04:56:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:24.643 04:56:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:24.643 04:56:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:24.643 04:56:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:24.643 04:56:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:24.643 04:56:35 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:07:24.643 [ 00:07:24.643 { 00:07:24.643 "name": "BaseBdev2", 00:07:24.643 "aliases": [ 00:07:24.643 "2d8d53de-1ad9-4597-beb5-1ae08a74f997" 00:07:24.643 ], 00:07:24.643 "product_name": "Malloc disk", 00:07:24.643 "block_size": 512, 00:07:24.643 "num_blocks": 65536, 00:07:24.643 "uuid": "2d8d53de-1ad9-4597-beb5-1ae08a74f997", 00:07:24.643 "assigned_rate_limits": { 00:07:24.643 "rw_ios_per_sec": 0, 00:07:24.643 "rw_mbytes_per_sec": 0, 00:07:24.643 "r_mbytes_per_sec": 0, 00:07:24.643 "w_mbytes_per_sec": 0 00:07:24.643 }, 00:07:24.643 "claimed": true, 00:07:24.643 "claim_type": "exclusive_write", 00:07:24.643 "zoned": false, 00:07:24.643 "supported_io_types": { 00:07:24.643 "read": true, 00:07:24.643 "write": true, 00:07:24.643 "unmap": true, 00:07:24.643 "flush": true, 00:07:24.643 "reset": true, 00:07:24.643 "nvme_admin": false, 00:07:24.643 "nvme_io": false, 00:07:24.643 "nvme_io_md": false, 00:07:24.643 "write_zeroes": true, 00:07:24.643 "zcopy": true, 00:07:24.643 "get_zone_info": false, 00:07:24.643 "zone_management": false, 00:07:24.643 "zone_append": false, 00:07:24.643 "compare": false, 00:07:24.643 "compare_and_write": false, 00:07:24.643 "abort": true, 00:07:24.643 "seek_hole": false, 00:07:24.643 "seek_data": false, 00:07:24.643 "copy": true, 00:07:24.643 "nvme_iov_md": false 00:07:24.643 }, 00:07:24.643 "memory_domains": [ 00:07:24.643 { 00:07:24.643 "dma_device_id": "system", 00:07:24.643 "dma_device_type": 1 00:07:24.643 }, 00:07:24.643 { 00:07:24.643 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:24.643 "dma_device_type": 2 00:07:24.643 } 00:07:24.643 ], 00:07:24.643 "driver_specific": {} 00:07:24.643 } 00:07:24.643 ] 00:07:24.643 04:56:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:24.643 04:56:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:07:24.643 04:56:35 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:24.643 04:56:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:24.643 04:56:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:07:24.643 04:56:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:24.643 04:56:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:24.643 04:56:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:24.643 04:56:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:24.643 04:56:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:24.643 04:56:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:24.643 04:56:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:24.643 04:56:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:24.643 04:56:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:24.643 04:56:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:24.643 04:56:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:24.643 04:56:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:24.643 04:56:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:24.643 04:56:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:24.643 04:56:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:07:24.643 "name": "Existed_Raid", 00:07:24.643 "uuid": "d4e03c74-b133-42ef-9e5e-042d839b6afc", 00:07:24.643 "strip_size_kb": 0, 00:07:24.643 "state": "online", 00:07:24.643 "raid_level": "raid1", 00:07:24.643 "superblock": true, 00:07:24.643 "num_base_bdevs": 2, 00:07:24.643 "num_base_bdevs_discovered": 2, 00:07:24.643 "num_base_bdevs_operational": 2, 00:07:24.643 "base_bdevs_list": [ 00:07:24.643 { 00:07:24.643 "name": "BaseBdev1", 00:07:24.643 "uuid": "e618c6f8-9552-4ce9-93bd-478113e8abe3", 00:07:24.643 "is_configured": true, 00:07:24.643 "data_offset": 2048, 00:07:24.643 "data_size": 63488 00:07:24.643 }, 00:07:24.643 { 00:07:24.643 "name": "BaseBdev2", 00:07:24.643 "uuid": "2d8d53de-1ad9-4597-beb5-1ae08a74f997", 00:07:24.643 "is_configured": true, 00:07:24.643 "data_offset": 2048, 00:07:24.643 "data_size": 63488 00:07:24.643 } 00:07:24.643 ] 00:07:24.643 }' 00:07:24.643 04:56:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:24.643 04:56:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:25.211 04:56:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:25.211 04:56:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:25.211 04:56:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:25.211 04:56:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:25.211 04:56:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:07:25.211 04:56:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:25.211 04:56:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:25.211 04:56:35 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:25.211 04:56:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:25.211 04:56:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:25.211 [2024-12-14 04:56:35.906186] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:25.211 04:56:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:25.211 04:56:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:25.211 "name": "Existed_Raid", 00:07:25.211 "aliases": [ 00:07:25.211 "d4e03c74-b133-42ef-9e5e-042d839b6afc" 00:07:25.211 ], 00:07:25.211 "product_name": "Raid Volume", 00:07:25.211 "block_size": 512, 00:07:25.211 "num_blocks": 63488, 00:07:25.211 "uuid": "d4e03c74-b133-42ef-9e5e-042d839b6afc", 00:07:25.211 "assigned_rate_limits": { 00:07:25.211 "rw_ios_per_sec": 0, 00:07:25.211 "rw_mbytes_per_sec": 0, 00:07:25.211 "r_mbytes_per_sec": 0, 00:07:25.211 "w_mbytes_per_sec": 0 00:07:25.211 }, 00:07:25.211 "claimed": false, 00:07:25.211 "zoned": false, 00:07:25.211 "supported_io_types": { 00:07:25.211 "read": true, 00:07:25.211 "write": true, 00:07:25.211 "unmap": false, 00:07:25.211 "flush": false, 00:07:25.211 "reset": true, 00:07:25.211 "nvme_admin": false, 00:07:25.211 "nvme_io": false, 00:07:25.211 "nvme_io_md": false, 00:07:25.211 "write_zeroes": true, 00:07:25.211 "zcopy": false, 00:07:25.211 "get_zone_info": false, 00:07:25.211 "zone_management": false, 00:07:25.211 "zone_append": false, 00:07:25.211 "compare": false, 00:07:25.211 "compare_and_write": false, 00:07:25.211 "abort": false, 00:07:25.211 "seek_hole": false, 00:07:25.211 "seek_data": false, 00:07:25.211 "copy": false, 00:07:25.211 "nvme_iov_md": false 00:07:25.211 }, 00:07:25.211 "memory_domains": [ 00:07:25.211 { 00:07:25.211 "dma_device_id": "system", 00:07:25.211 
"dma_device_type": 1 00:07:25.211 }, 00:07:25.211 { 00:07:25.211 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:25.211 "dma_device_type": 2 00:07:25.211 }, 00:07:25.211 { 00:07:25.211 "dma_device_id": "system", 00:07:25.211 "dma_device_type": 1 00:07:25.211 }, 00:07:25.211 { 00:07:25.211 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:25.211 "dma_device_type": 2 00:07:25.211 } 00:07:25.211 ], 00:07:25.211 "driver_specific": { 00:07:25.211 "raid": { 00:07:25.211 "uuid": "d4e03c74-b133-42ef-9e5e-042d839b6afc", 00:07:25.211 "strip_size_kb": 0, 00:07:25.211 "state": "online", 00:07:25.211 "raid_level": "raid1", 00:07:25.211 "superblock": true, 00:07:25.211 "num_base_bdevs": 2, 00:07:25.211 "num_base_bdevs_discovered": 2, 00:07:25.211 "num_base_bdevs_operational": 2, 00:07:25.211 "base_bdevs_list": [ 00:07:25.211 { 00:07:25.211 "name": "BaseBdev1", 00:07:25.211 "uuid": "e618c6f8-9552-4ce9-93bd-478113e8abe3", 00:07:25.211 "is_configured": true, 00:07:25.211 "data_offset": 2048, 00:07:25.211 "data_size": 63488 00:07:25.211 }, 00:07:25.211 { 00:07:25.211 "name": "BaseBdev2", 00:07:25.211 "uuid": "2d8d53de-1ad9-4597-beb5-1ae08a74f997", 00:07:25.211 "is_configured": true, 00:07:25.211 "data_offset": 2048, 00:07:25.211 "data_size": 63488 00:07:25.211 } 00:07:25.211 ] 00:07:25.211 } 00:07:25.211 } 00:07:25.211 }' 00:07:25.211 04:56:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:25.211 04:56:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:25.211 BaseBdev2' 00:07:25.211 04:56:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:25.211 04:56:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:25.211 04:56:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 
-- # for name in $base_bdev_names 00:07:25.211 04:56:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:25.211 04:56:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:25.211 04:56:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:25.211 04:56:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:25.211 04:56:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:25.211 04:56:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:25.211 04:56:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:25.211 04:56:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:25.211 04:56:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:25.211 04:56:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:25.211 04:56:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:25.211 04:56:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:25.471 04:56:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:25.471 04:56:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:25.471 04:56:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:25.471 04:56:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:25.471 04:56:36 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:25.471 04:56:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:25.471 [2024-12-14 04:56:36.109584] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:25.471 04:56:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:25.471 04:56:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:25.471 04:56:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:07:25.471 04:56:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:25.471 04:56:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:07:25.471 04:56:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:07:25.471 04:56:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:07:25.471 04:56:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:25.471 04:56:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:25.471 04:56:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:25.471 04:56:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:25.471 04:56:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:25.471 04:56:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:25.471 04:56:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:25.471 04:56:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:07:25.471 04:56:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:25.471 04:56:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:25.471 04:56:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:25.471 04:56:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:25.471 04:56:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:25.471 04:56:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:25.471 04:56:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:25.471 "name": "Existed_Raid", 00:07:25.471 "uuid": "d4e03c74-b133-42ef-9e5e-042d839b6afc", 00:07:25.471 "strip_size_kb": 0, 00:07:25.471 "state": "online", 00:07:25.471 "raid_level": "raid1", 00:07:25.471 "superblock": true, 00:07:25.471 "num_base_bdevs": 2, 00:07:25.471 "num_base_bdevs_discovered": 1, 00:07:25.471 "num_base_bdevs_operational": 1, 00:07:25.471 "base_bdevs_list": [ 00:07:25.471 { 00:07:25.471 "name": null, 00:07:25.471 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:25.471 "is_configured": false, 00:07:25.471 "data_offset": 0, 00:07:25.471 "data_size": 63488 00:07:25.471 }, 00:07:25.471 { 00:07:25.471 "name": "BaseBdev2", 00:07:25.471 "uuid": "2d8d53de-1ad9-4597-beb5-1ae08a74f997", 00:07:25.471 "is_configured": true, 00:07:25.471 "data_offset": 2048, 00:07:25.471 "data_size": 63488 00:07:25.471 } 00:07:25.471 ] 00:07:25.471 }' 00:07:25.471 04:56:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:25.471 04:56:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:25.731 04:56:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 
00:07:25.731 04:56:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:25.731 04:56:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:25.731 04:56:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:25.731 04:56:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:25.731 04:56:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:25.731 04:56:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:25.731 04:56:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:25.731 04:56:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:25.731 04:56:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:25.731 04:56:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:25.731 04:56:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:25.731 [2024-12-14 04:56:36.595998] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:25.731 [2024-12-14 04:56:36.596141] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:25.731 [2024-12-14 04:56:36.607613] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:25.731 [2024-12-14 04:56:36.607737] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:25.731 [2024-12-14 04:56:36.607788] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:07:25.731 04:56:36 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:25.731 04:56:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:25.731 04:56:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:25.991 04:56:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:25.991 04:56:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:25.991 04:56:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:25.991 04:56:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:25.991 04:56:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:25.991 04:56:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:25.991 04:56:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:25.991 04:56:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:25.991 04:56:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 74253 00:07:25.991 04:56:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 74253 ']' 00:07:25.991 04:56:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 74253 00:07:25.991 04:56:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:07:25.991 04:56:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:25.991 04:56:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74253 00:07:25.991 killing process with pid 74253 00:07:25.991 04:56:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 
00:07:25.991 04:56:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:25.991 04:56:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74253' 00:07:25.991 04:56:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 74253 00:07:25.991 [2024-12-14 04:56:36.692878] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:25.991 04:56:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 74253 00:07:25.991 [2024-12-14 04:56:36.693830] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:26.251 04:56:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:07:26.251 00:07:26.251 real 0m3.785s 00:07:26.251 user 0m5.911s 00:07:26.251 sys 0m0.753s 00:07:26.251 04:56:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:26.251 ************************************ 00:07:26.251 END TEST raid_state_function_test_sb 00:07:26.251 ************************************ 00:07:26.251 04:56:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:26.251 04:56:36 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:07:26.251 04:56:36 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:07:26.251 04:56:36 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:26.251 04:56:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:26.251 ************************************ 00:07:26.251 START TEST raid_superblock_test 00:07:26.251 ************************************ 00:07:26.251 04:56:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 2 00:07:26.251 04:56:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 
00:07:26.251 04:56:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:07:26.251 04:56:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:07:26.251 04:56:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:07:26.251 04:56:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:07:26.251 04:56:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:07:26.251 04:56:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:07:26.251 04:56:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:07:26.251 04:56:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:07:26.251 04:56:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:07:26.251 04:56:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:07:26.251 04:56:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:07:26.251 04:56:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:07:26.251 04:56:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:07:26.251 04:56:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:07:26.251 04:56:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=74489 00:07:26.251 04:56:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:07:26.251 04:56:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 74489 00:07:26.251 04:56:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 74489 ']' 00:07:26.251 04:56:37 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:26.251 04:56:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:26.251 04:56:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:26.251 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:26.251 04:56:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:26.251 04:56:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.251 [2024-12-14 04:56:37.091868] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:26.251 [2024-12-14 04:56:37.092078] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74489 ] 00:07:26.510 [2024-12-14 04:56:37.245287] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.510 [2024-12-14 04:56:37.292372] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.510 [2024-12-14 04:56:37.334733] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:26.510 [2024-12-14 04:56:37.334773] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:27.080 04:56:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:27.080 04:56:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:07:27.080 04:56:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:07:27.080 04:56:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:27.080 04:56:37 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:07:27.080 04:56:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:07:27.080 04:56:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:07:27.080 04:56:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:27.080 04:56:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:27.080 04:56:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:27.080 04:56:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:07:27.080 04:56:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:27.080 04:56:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.080 malloc1 00:07:27.080 04:56:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:27.080 04:56:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:27.080 04:56:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:27.080 04:56:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.080 [2024-12-14 04:56:37.932591] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:27.080 [2024-12-14 04:56:37.932721] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:27.080 [2024-12-14 04:56:37.932760] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:07:27.080 [2024-12-14 04:56:37.932821] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:27.080 
[2024-12-14 04:56:37.934854] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:27.080 [2024-12-14 04:56:37.934926] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:27.080 pt1 00:07:27.080 04:56:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:27.080 04:56:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:27.080 04:56:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:27.080 04:56:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:07:27.080 04:56:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:07:27.080 04:56:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:07:27.080 04:56:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:27.080 04:56:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:27.080 04:56:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:27.080 04:56:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:07:27.080 04:56:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:27.080 04:56:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.340 malloc2 00:07:27.340 04:56:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:27.340 04:56:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:27.340 04:56:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:27.340 04:56:37 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.340 [2024-12-14 04:56:37.976461] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:27.340 [2024-12-14 04:56:37.976673] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:27.340 [2024-12-14 04:56:37.976723] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:07:27.340 [2024-12-14 04:56:37.976751] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:27.340 [2024-12-14 04:56:37.981848] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:27.340 [2024-12-14 04:56:37.981931] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:27.340 pt2 00:07:27.340 04:56:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:27.340 04:56:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:27.340 04:56:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:27.340 04:56:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:07:27.340 04:56:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:27.340 04:56:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.340 [2024-12-14 04:56:37.990235] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:27.340 [2024-12-14 04:56:37.993341] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:27.340 [2024-12-14 04:56:37.993659] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:07:27.340 [2024-12-14 04:56:37.993690] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:27.340 [2024-12-14 
04:56:37.993995] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:27.340 [2024-12-14 04:56:37.994149] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:07:27.340 [2024-12-14 04:56:37.994162] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:07:27.340 [2024-12-14 04:56:37.994366] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:27.340 04:56:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:27.340 04:56:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:07:27.340 04:56:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:27.340 04:56:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:27.340 04:56:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:27.340 04:56:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:27.340 04:56:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:27.340 04:56:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:27.340 04:56:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:27.340 04:56:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:27.340 04:56:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:27.340 04:56:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:27.340 04:56:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:27.340 04:56:38 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:27.340 04:56:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.340 04:56:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:27.340 04:56:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:27.340 "name": "raid_bdev1", 00:07:27.340 "uuid": "df89aa2d-6340-447e-b0a5-389add1bb359", 00:07:27.340 "strip_size_kb": 0, 00:07:27.340 "state": "online", 00:07:27.340 "raid_level": "raid1", 00:07:27.340 "superblock": true, 00:07:27.340 "num_base_bdevs": 2, 00:07:27.340 "num_base_bdevs_discovered": 2, 00:07:27.340 "num_base_bdevs_operational": 2, 00:07:27.340 "base_bdevs_list": [ 00:07:27.340 { 00:07:27.340 "name": "pt1", 00:07:27.340 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:27.340 "is_configured": true, 00:07:27.340 "data_offset": 2048, 00:07:27.340 "data_size": 63488 00:07:27.340 }, 00:07:27.340 { 00:07:27.340 "name": "pt2", 00:07:27.340 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:27.340 "is_configured": true, 00:07:27.340 "data_offset": 2048, 00:07:27.340 "data_size": 63488 00:07:27.340 } 00:07:27.340 ] 00:07:27.340 }' 00:07:27.340 04:56:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:27.340 04:56:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.600 04:56:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:07:27.600 04:56:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:27.600 04:56:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:27.600 04:56:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:27.600 04:56:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:27.600 
04:56:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:27.600 04:56:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:27.600 04:56:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:27.600 04:56:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:27.600 04:56:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.600 [2024-12-14 04:56:38.393922] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:27.600 04:56:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:27.600 04:56:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:27.600 "name": "raid_bdev1", 00:07:27.600 "aliases": [ 00:07:27.600 "df89aa2d-6340-447e-b0a5-389add1bb359" 00:07:27.600 ], 00:07:27.600 "product_name": "Raid Volume", 00:07:27.600 "block_size": 512, 00:07:27.600 "num_blocks": 63488, 00:07:27.600 "uuid": "df89aa2d-6340-447e-b0a5-389add1bb359", 00:07:27.600 "assigned_rate_limits": { 00:07:27.600 "rw_ios_per_sec": 0, 00:07:27.600 "rw_mbytes_per_sec": 0, 00:07:27.600 "r_mbytes_per_sec": 0, 00:07:27.600 "w_mbytes_per_sec": 0 00:07:27.600 }, 00:07:27.600 "claimed": false, 00:07:27.600 "zoned": false, 00:07:27.600 "supported_io_types": { 00:07:27.600 "read": true, 00:07:27.600 "write": true, 00:07:27.600 "unmap": false, 00:07:27.600 "flush": false, 00:07:27.600 "reset": true, 00:07:27.600 "nvme_admin": false, 00:07:27.600 "nvme_io": false, 00:07:27.600 "nvme_io_md": false, 00:07:27.600 "write_zeroes": true, 00:07:27.600 "zcopy": false, 00:07:27.600 "get_zone_info": false, 00:07:27.600 "zone_management": false, 00:07:27.600 "zone_append": false, 00:07:27.600 "compare": false, 00:07:27.600 "compare_and_write": false, 00:07:27.600 "abort": false, 00:07:27.600 "seek_hole": false, 
00:07:27.600 "seek_data": false, 00:07:27.600 "copy": false, 00:07:27.600 "nvme_iov_md": false 00:07:27.600 }, 00:07:27.600 "memory_domains": [ 00:07:27.600 { 00:07:27.600 "dma_device_id": "system", 00:07:27.600 "dma_device_type": 1 00:07:27.600 }, 00:07:27.600 { 00:07:27.600 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:27.600 "dma_device_type": 2 00:07:27.600 }, 00:07:27.600 { 00:07:27.600 "dma_device_id": "system", 00:07:27.600 "dma_device_type": 1 00:07:27.600 }, 00:07:27.600 { 00:07:27.600 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:27.600 "dma_device_type": 2 00:07:27.600 } 00:07:27.600 ], 00:07:27.600 "driver_specific": { 00:07:27.600 "raid": { 00:07:27.600 "uuid": "df89aa2d-6340-447e-b0a5-389add1bb359", 00:07:27.600 "strip_size_kb": 0, 00:07:27.600 "state": "online", 00:07:27.600 "raid_level": "raid1", 00:07:27.600 "superblock": true, 00:07:27.600 "num_base_bdevs": 2, 00:07:27.600 "num_base_bdevs_discovered": 2, 00:07:27.600 "num_base_bdevs_operational": 2, 00:07:27.600 "base_bdevs_list": [ 00:07:27.600 { 00:07:27.600 "name": "pt1", 00:07:27.601 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:27.601 "is_configured": true, 00:07:27.601 "data_offset": 2048, 00:07:27.601 "data_size": 63488 00:07:27.601 }, 00:07:27.601 { 00:07:27.601 "name": "pt2", 00:07:27.601 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:27.601 "is_configured": true, 00:07:27.601 "data_offset": 2048, 00:07:27.601 "data_size": 63488 00:07:27.601 } 00:07:27.601 ] 00:07:27.601 } 00:07:27.601 } 00:07:27.601 }' 00:07:27.601 04:56:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:27.860 04:56:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:27.860 pt2' 00:07:27.860 04:56:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:27.860 04:56:38 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:27.860 04:56:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:27.860 04:56:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:27.860 04:56:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:27.860 04:56:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.860 04:56:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:27.860 04:56:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:27.860 04:56:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:27.860 04:56:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:27.860 04:56:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:27.860 04:56:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:27.860 04:56:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:27.860 04:56:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.860 04:56:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:27.860 04:56:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:27.860 04:56:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:27.860 04:56:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:27.860 04:56:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:07:27.860 04:56:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:27.860 04:56:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.860 04:56:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:07:27.860 [2024-12-14 04:56:38.637424] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:27.860 04:56:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:27.860 04:56:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=df89aa2d-6340-447e-b0a5-389add1bb359 00:07:27.860 04:56:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z df89aa2d-6340-447e-b0a5-389add1bb359 ']' 00:07:27.860 04:56:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:27.860 04:56:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:27.860 04:56:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.860 [2024-12-14 04:56:38.689107] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:27.860 [2024-12-14 04:56:38.689132] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:27.860 [2024-12-14 04:56:38.689214] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:27.860 [2024-12-14 04:56:38.689292] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:27.860 [2024-12-14 04:56:38.689301] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:07:27.860 04:56:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:27.860 04:56:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:07:27.860 04:56:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:27.860 04:56:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.860 04:56:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:07:27.860 04:56:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:28.120 04:56:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:07:28.120 04:56:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:07:28.120 04:56:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:28.120 04:56:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:07:28.120 04:56:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:28.120 04:56:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.120 04:56:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:28.120 04:56:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:28.120 04:56:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:07:28.120 04:56:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:28.120 04:56:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.120 04:56:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:28.120 04:56:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:07:28.120 04:56:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:07:28.120 04:56:38 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:07:28.120 04:56:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.120 04:56:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:28.120 04:56:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:07:28.121 04:56:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:28.121 04:56:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:07:28.121 04:56:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:28.121 04:56:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:07:28.121 04:56:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:28.121 04:56:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:07:28.121 04:56:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:28.121 04:56:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:28.121 04:56:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:28.121 04:56:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.121 [2024-12-14 04:56:38.824902] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:07:28.121 [2024-12-14 04:56:38.826700] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:07:28.121 [2024-12-14 04:56:38.826770] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock 
of a different raid bdev found on bdev malloc1 00:07:28.121 [2024-12-14 04:56:38.826819] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:07:28.121 [2024-12-14 04:56:38.826837] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:28.121 [2024-12-14 04:56:38.826845] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state configuring 00:07:28.121 request: 00:07:28.121 { 00:07:28.121 "name": "raid_bdev1", 00:07:28.121 "raid_level": "raid1", 00:07:28.121 "base_bdevs": [ 00:07:28.121 "malloc1", 00:07:28.121 "malloc2" 00:07:28.121 ], 00:07:28.121 "superblock": false, 00:07:28.121 "method": "bdev_raid_create", 00:07:28.121 "req_id": 1 00:07:28.121 } 00:07:28.121 Got JSON-RPC error response 00:07:28.121 response: 00:07:28.121 { 00:07:28.121 "code": -17, 00:07:28.121 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:07:28.121 } 00:07:28.121 04:56:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:07:28.121 04:56:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:07:28.121 04:56:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:28.121 04:56:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:28.121 04:56:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:28.121 04:56:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:28.121 04:56:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:07:28.121 04:56:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:28.121 04:56:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.121 04:56:38 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:28.121 04:56:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:07:28.121 04:56:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:07:28.121 04:56:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:28.121 04:56:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:28.121 04:56:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.121 [2024-12-14 04:56:38.892770] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:28.121 [2024-12-14 04:56:38.892867] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:28.121 [2024-12-14 04:56:38.892900] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:07:28.121 [2024-12-14 04:56:38.892926] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:28.121 [2024-12-14 04:56:38.894935] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:28.121 [2024-12-14 04:56:38.895015] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:28.121 [2024-12-14 04:56:38.895100] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:07:28.121 [2024-12-14 04:56:38.895171] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:28.121 pt1 00:07:28.121 04:56:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:28.121 04:56:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:07:28.121 04:56:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:28.121 04:56:38 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:28.121 04:56:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:28.121 04:56:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:28.121 04:56:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:28.121 04:56:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:28.121 04:56:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:28.121 04:56:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:28.121 04:56:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:28.121 04:56:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:28.121 04:56:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:28.121 04:56:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.121 04:56:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:28.121 04:56:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:28.121 04:56:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:28.121 "name": "raid_bdev1", 00:07:28.121 "uuid": "df89aa2d-6340-447e-b0a5-389add1bb359", 00:07:28.121 "strip_size_kb": 0, 00:07:28.121 "state": "configuring", 00:07:28.121 "raid_level": "raid1", 00:07:28.121 "superblock": true, 00:07:28.121 "num_base_bdevs": 2, 00:07:28.121 "num_base_bdevs_discovered": 1, 00:07:28.121 "num_base_bdevs_operational": 2, 00:07:28.121 "base_bdevs_list": [ 00:07:28.121 { 00:07:28.121 "name": "pt1", 00:07:28.121 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:28.121 
"is_configured": true, 00:07:28.121 "data_offset": 2048, 00:07:28.121 "data_size": 63488 00:07:28.121 }, 00:07:28.121 { 00:07:28.121 "name": null, 00:07:28.121 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:28.121 "is_configured": false, 00:07:28.121 "data_offset": 2048, 00:07:28.121 "data_size": 63488 00:07:28.121 } 00:07:28.121 ] 00:07:28.121 }' 00:07:28.121 04:56:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:28.121 04:56:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.691 04:56:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:07:28.691 04:56:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:07:28.691 04:56:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:28.691 04:56:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:28.691 04:56:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:28.691 04:56:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.691 [2024-12-14 04:56:39.324029] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:28.691 [2024-12-14 04:56:39.324125] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:28.691 [2024-12-14 04:56:39.324171] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:07:28.691 [2024-12-14 04:56:39.324200] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:28.691 [2024-12-14 04:56:39.324580] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:28.691 [2024-12-14 04:56:39.324632] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:28.691 [2024-12-14 04:56:39.324719] 
bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:07:28.691 [2024-12-14 04:56:39.324764] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:28.691 [2024-12-14 04:56:39.324864] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:07:28.691 [2024-12-14 04:56:39.324898] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:28.691 [2024-12-14 04:56:39.325132] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:07:28.691 [2024-12-14 04:56:39.325267] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:07:28.691 [2024-12-14 04:56:39.325283] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:07:28.691 [2024-12-14 04:56:39.325380] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:28.691 pt2 00:07:28.691 04:56:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:28.691 04:56:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:07:28.691 04:56:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:28.691 04:56:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:07:28.691 04:56:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:28.691 04:56:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:28.691 04:56:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:28.691 04:56:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:28.691 04:56:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:28.691 
04:56:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:28.691 04:56:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:28.691 04:56:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:28.691 04:56:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:28.691 04:56:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:28.691 04:56:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:28.691 04:56:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:28.691 04:56:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.691 04:56:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:28.691 04:56:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:28.691 "name": "raid_bdev1", 00:07:28.691 "uuid": "df89aa2d-6340-447e-b0a5-389add1bb359", 00:07:28.691 "strip_size_kb": 0, 00:07:28.691 "state": "online", 00:07:28.691 "raid_level": "raid1", 00:07:28.691 "superblock": true, 00:07:28.691 "num_base_bdevs": 2, 00:07:28.691 "num_base_bdevs_discovered": 2, 00:07:28.691 "num_base_bdevs_operational": 2, 00:07:28.691 "base_bdevs_list": [ 00:07:28.691 { 00:07:28.691 "name": "pt1", 00:07:28.691 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:28.691 "is_configured": true, 00:07:28.691 "data_offset": 2048, 00:07:28.691 "data_size": 63488 00:07:28.691 }, 00:07:28.691 { 00:07:28.691 "name": "pt2", 00:07:28.691 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:28.691 "is_configured": true, 00:07:28.691 "data_offset": 2048, 00:07:28.691 "data_size": 63488 00:07:28.691 } 00:07:28.691 ] 00:07:28.691 }' 00:07:28.691 04:56:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:07:28.691 04:56:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.951 04:56:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:07:28.951 04:56:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:28.951 04:56:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:28.951 04:56:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:28.951 04:56:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:28.951 04:56:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:28.951 04:56:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:28.951 04:56:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:28.951 04:56:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:28.951 04:56:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.951 [2024-12-14 04:56:39.743550] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:28.951 04:56:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:28.951 04:56:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:28.951 "name": "raid_bdev1", 00:07:28.951 "aliases": [ 00:07:28.951 "df89aa2d-6340-447e-b0a5-389add1bb359" 00:07:28.951 ], 00:07:28.951 "product_name": "Raid Volume", 00:07:28.951 "block_size": 512, 00:07:28.951 "num_blocks": 63488, 00:07:28.951 "uuid": "df89aa2d-6340-447e-b0a5-389add1bb359", 00:07:28.951 "assigned_rate_limits": { 00:07:28.951 "rw_ios_per_sec": 0, 00:07:28.951 "rw_mbytes_per_sec": 0, 00:07:28.951 "r_mbytes_per_sec": 0, 00:07:28.951 "w_mbytes_per_sec": 0 
00:07:28.951 }, 00:07:28.951 "claimed": false, 00:07:28.951 "zoned": false, 00:07:28.951 "supported_io_types": { 00:07:28.951 "read": true, 00:07:28.951 "write": true, 00:07:28.951 "unmap": false, 00:07:28.951 "flush": false, 00:07:28.951 "reset": true, 00:07:28.951 "nvme_admin": false, 00:07:28.951 "nvme_io": false, 00:07:28.951 "nvme_io_md": false, 00:07:28.951 "write_zeroes": true, 00:07:28.951 "zcopy": false, 00:07:28.951 "get_zone_info": false, 00:07:28.951 "zone_management": false, 00:07:28.951 "zone_append": false, 00:07:28.951 "compare": false, 00:07:28.951 "compare_and_write": false, 00:07:28.951 "abort": false, 00:07:28.951 "seek_hole": false, 00:07:28.951 "seek_data": false, 00:07:28.951 "copy": false, 00:07:28.951 "nvme_iov_md": false 00:07:28.951 }, 00:07:28.951 "memory_domains": [ 00:07:28.951 { 00:07:28.951 "dma_device_id": "system", 00:07:28.951 "dma_device_type": 1 00:07:28.951 }, 00:07:28.951 { 00:07:28.951 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:28.951 "dma_device_type": 2 00:07:28.951 }, 00:07:28.951 { 00:07:28.951 "dma_device_id": "system", 00:07:28.951 "dma_device_type": 1 00:07:28.951 }, 00:07:28.951 { 00:07:28.951 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:28.951 "dma_device_type": 2 00:07:28.951 } 00:07:28.951 ], 00:07:28.951 "driver_specific": { 00:07:28.951 "raid": { 00:07:28.951 "uuid": "df89aa2d-6340-447e-b0a5-389add1bb359", 00:07:28.951 "strip_size_kb": 0, 00:07:28.951 "state": "online", 00:07:28.951 "raid_level": "raid1", 00:07:28.951 "superblock": true, 00:07:28.951 "num_base_bdevs": 2, 00:07:28.951 "num_base_bdevs_discovered": 2, 00:07:28.951 "num_base_bdevs_operational": 2, 00:07:28.951 "base_bdevs_list": [ 00:07:28.951 { 00:07:28.951 "name": "pt1", 00:07:28.951 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:28.951 "is_configured": true, 00:07:28.951 "data_offset": 2048, 00:07:28.951 "data_size": 63488 00:07:28.951 }, 00:07:28.951 { 00:07:28.951 "name": "pt2", 00:07:28.951 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:07:28.951 "is_configured": true, 00:07:28.951 "data_offset": 2048, 00:07:28.951 "data_size": 63488 00:07:28.951 } 00:07:28.951 ] 00:07:28.951 } 00:07:28.951 } 00:07:28.951 }' 00:07:28.951 04:56:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:28.951 04:56:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:28.951 pt2' 00:07:28.951 04:56:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:29.211 04:56:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:29.211 04:56:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:29.211 04:56:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:29.211 04:56:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:29.211 04:56:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:29.211 04:56:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.211 04:56:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:29.211 04:56:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:29.211 04:56:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:29.211 04:56:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:29.211 04:56:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:29.211 04:56:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:29.211 04:56:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:29.211 04:56:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.211 04:56:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:29.211 04:56:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:29.211 04:56:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:29.211 04:56:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:29.211 04:56:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:29.211 04:56:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.211 04:56:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:07:29.211 [2024-12-14 04:56:39.935458] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:29.211 04:56:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:29.211 04:56:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' df89aa2d-6340-447e-b0a5-389add1bb359 '!=' df89aa2d-6340-447e-b0a5-389add1bb359 ']' 00:07:29.211 04:56:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:07:29.211 04:56:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:29.211 04:56:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:07:29.211 04:56:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:07:29.211 04:56:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:29.211 04:56:39 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:07:29.211 [2024-12-14 04:56:39.979176] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:07:29.211 04:56:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:29.211 04:56:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:07:29.211 04:56:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:29.211 04:56:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:29.211 04:56:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:29.211 04:56:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:29.211 04:56:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:29.211 04:56:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:29.211 04:56:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:29.211 04:56:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:29.211 04:56:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:29.211 04:56:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:29.212 04:56:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:29.212 04:56:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:29.212 04:56:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.212 04:56:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:29.212 04:56:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 
00:07:29.212 "name": "raid_bdev1", 00:07:29.212 "uuid": "df89aa2d-6340-447e-b0a5-389add1bb359", 00:07:29.212 "strip_size_kb": 0, 00:07:29.212 "state": "online", 00:07:29.212 "raid_level": "raid1", 00:07:29.212 "superblock": true, 00:07:29.212 "num_base_bdevs": 2, 00:07:29.212 "num_base_bdevs_discovered": 1, 00:07:29.212 "num_base_bdevs_operational": 1, 00:07:29.212 "base_bdevs_list": [ 00:07:29.212 { 00:07:29.212 "name": null, 00:07:29.212 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:29.212 "is_configured": false, 00:07:29.212 "data_offset": 0, 00:07:29.212 "data_size": 63488 00:07:29.212 }, 00:07:29.212 { 00:07:29.212 "name": "pt2", 00:07:29.212 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:29.212 "is_configured": true, 00:07:29.212 "data_offset": 2048, 00:07:29.212 "data_size": 63488 00:07:29.212 } 00:07:29.212 ] 00:07:29.212 }' 00:07:29.212 04:56:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:29.212 04:56:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.781 04:56:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:29.781 04:56:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:29.781 04:56:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.781 [2024-12-14 04:56:40.430338] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:29.781 [2024-12-14 04:56:40.430410] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:29.781 [2024-12-14 04:56:40.430502] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:29.781 [2024-12-14 04:56:40.430580] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:29.781 [2024-12-14 04:56:40.430659] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:07:29.781 04:56:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:29.781 04:56:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:29.781 04:56:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:07:29.782 04:56:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:29.782 04:56:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.782 04:56:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:29.782 04:56:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:07:29.782 04:56:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:07:29.782 04:56:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:07:29.782 04:56:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:07:29.782 04:56:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:07:29.782 04:56:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:29.782 04:56:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.782 04:56:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:29.782 04:56:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:07:29.782 04:56:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:07:29.782 04:56:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:07:29.782 04:56:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:07:29.782 04:56:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=1 
00:07:29.782 04:56:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:29.782 04:56:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:29.782 04:56:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.782 [2024-12-14 04:56:40.502231] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:29.782 [2024-12-14 04:56:40.502276] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:29.782 [2024-12-14 04:56:40.502309] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:07:29.782 [2024-12-14 04:56:40.502318] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:29.782 [2024-12-14 04:56:40.504481] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:29.782 [2024-12-14 04:56:40.504549] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:29.782 [2024-12-14 04:56:40.504645] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:07:29.782 [2024-12-14 04:56:40.504695] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:29.782 [2024-12-14 04:56:40.504787] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:07:29.782 [2024-12-14 04:56:40.504836] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:29.782 [2024-12-14 04:56:40.505075] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:29.782 [2024-12-14 04:56:40.505242] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:07:29.782 [2024-12-14 04:56:40.505287] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 
0x617000006d00 00:07:29.782 [2024-12-14 04:56:40.505426] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:29.782 pt2 00:07:29.782 04:56:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:29.782 04:56:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:07:29.782 04:56:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:29.782 04:56:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:29.782 04:56:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:29.782 04:56:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:29.782 04:56:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:29.782 04:56:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:29.782 04:56:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:29.782 04:56:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:29.782 04:56:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:29.782 04:56:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:29.782 04:56:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:29.782 04:56:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.782 04:56:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:29.782 04:56:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:29.782 04:56:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 
00:07:29.782 "name": "raid_bdev1", 00:07:29.782 "uuid": "df89aa2d-6340-447e-b0a5-389add1bb359", 00:07:29.782 "strip_size_kb": 0, 00:07:29.782 "state": "online", 00:07:29.782 "raid_level": "raid1", 00:07:29.782 "superblock": true, 00:07:29.782 "num_base_bdevs": 2, 00:07:29.782 "num_base_bdevs_discovered": 1, 00:07:29.782 "num_base_bdevs_operational": 1, 00:07:29.782 "base_bdevs_list": [ 00:07:29.782 { 00:07:29.782 "name": null, 00:07:29.782 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:29.782 "is_configured": false, 00:07:29.782 "data_offset": 2048, 00:07:29.782 "data_size": 63488 00:07:29.782 }, 00:07:29.782 { 00:07:29.782 "name": "pt2", 00:07:29.782 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:29.782 "is_configured": true, 00:07:29.782 "data_offset": 2048, 00:07:29.782 "data_size": 63488 00:07:29.782 } 00:07:29.782 ] 00:07:29.782 }' 00:07:29.782 04:56:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:29.782 04:56:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.352 04:56:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:30.352 04:56:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:30.352 04:56:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.352 [2024-12-14 04:56:40.941489] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:30.352 [2024-12-14 04:56:40.941513] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:30.352 [2024-12-14 04:56:40.941576] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:30.352 [2024-12-14 04:56:40.941619] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:30.352 [2024-12-14 04:56:40.941630] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:07:30.352 04:56:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:30.352 04:56:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:07:30.352 04:56:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:30.352 04:56:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:30.352 04:56:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.352 04:56:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:30.352 04:56:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:07:30.352 04:56:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:07:30.352 04:56:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:07:30.352 04:56:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:30.352 04:56:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:30.352 04:56:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.352 [2024-12-14 04:56:40.985378] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:30.352 [2024-12-14 04:56:40.985466] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:30.352 [2024-12-14 04:56:40.985502] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:07:30.352 [2024-12-14 04:56:40.985537] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:30.352 [2024-12-14 04:56:40.987598] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:30.352 [2024-12-14 04:56:40.987670] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:30.352 [2024-12-14 04:56:40.987753] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:07:30.352 [2024-12-14 04:56:40.987826] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:30.352 [2024-12-14 04:56:40.987992] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:07:30.352 [2024-12-14 04:56:40.988050] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:30.352 [2024-12-14 04:56:40.988085] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state configuring 00:07:30.352 [2024-12-14 04:56:40.988170] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:30.352 [2024-12-14 04:56:40.988274] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:07:30.352 [2024-12-14 04:56:40.988314] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:30.352 [2024-12-14 04:56:40.988571] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:07:30.352 [2024-12-14 04:56:40.988718] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:07:30.352 [2024-12-14 04:56:40.988756] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:07:30.352 [2024-12-14 04:56:40.988896] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:30.352 pt1 00:07:30.352 04:56:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:30.352 04:56:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:07:30.352 04:56:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:07:30.352 04:56:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:30.352 04:56:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:30.352 04:56:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:30.352 04:56:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:30.352 04:56:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:30.352 04:56:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:30.352 04:56:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:30.352 04:56:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:30.352 04:56:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:30.352 04:56:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:30.352 04:56:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:30.352 04:56:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:30.352 04:56:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.352 04:56:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:30.352 04:56:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:30.352 "name": "raid_bdev1", 00:07:30.352 "uuid": "df89aa2d-6340-447e-b0a5-389add1bb359", 00:07:30.352 "strip_size_kb": 0, 00:07:30.352 "state": "online", 00:07:30.352 "raid_level": "raid1", 00:07:30.352 "superblock": true, 00:07:30.352 "num_base_bdevs": 2, 00:07:30.352 "num_base_bdevs_discovered": 1, 00:07:30.352 "num_base_bdevs_operational": 
1, 00:07:30.352 "base_bdevs_list": [ 00:07:30.352 { 00:07:30.352 "name": null, 00:07:30.352 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:30.352 "is_configured": false, 00:07:30.352 "data_offset": 2048, 00:07:30.352 "data_size": 63488 00:07:30.352 }, 00:07:30.352 { 00:07:30.352 "name": "pt2", 00:07:30.352 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:30.352 "is_configured": true, 00:07:30.352 "data_offset": 2048, 00:07:30.352 "data_size": 63488 00:07:30.352 } 00:07:30.352 ] 00:07:30.352 }' 00:07:30.352 04:56:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:30.352 04:56:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.612 04:56:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:07:30.612 04:56:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:30.612 04:56:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.612 04:56:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:07:30.612 04:56:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:30.612 04:56:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:07:30.612 04:56:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:07:30.612 04:56:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:30.612 04:56:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:30.612 04:56:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.612 [2024-12-14 04:56:41.468778] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:30.612 04:56:41 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:30.873 04:56:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' df89aa2d-6340-447e-b0a5-389add1bb359 '!=' df89aa2d-6340-447e-b0a5-389add1bb359 ']' 00:07:30.873 04:56:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 74489 00:07:30.873 04:56:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 74489 ']' 00:07:30.873 04:56:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 74489 00:07:30.873 04:56:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:07:30.873 04:56:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:30.873 04:56:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74489 00:07:30.873 04:56:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:30.873 04:56:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:30.873 killing process with pid 74489 00:07:30.873 04:56:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74489' 00:07:30.873 04:56:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 74489 00:07:30.873 [2024-12-14 04:56:41.550950] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:30.873 [2024-12-14 04:56:41.551022] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:30.873 [2024-12-14 04:56:41.551067] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:30.873 [2024-12-14 04:56:41.551075] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:07:30.873 04:56:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 
74489 00:07:30.873 [2024-12-14 04:56:41.574117] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:31.132 04:56:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:07:31.132 00:07:31.132 real 0m4.809s 00:07:31.132 user 0m7.847s 00:07:31.132 sys 0m0.936s 00:07:31.132 04:56:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:31.132 ************************************ 00:07:31.132 END TEST raid_superblock_test 00:07:31.132 ************************************ 00:07:31.132 04:56:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.132 04:56:41 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 2 read 00:07:31.132 04:56:41 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:31.132 04:56:41 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:31.132 04:56:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:31.132 ************************************ 00:07:31.132 START TEST raid_read_error_test 00:07:31.132 ************************************ 00:07:31.132 04:56:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 2 read 00:07:31.132 04:56:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:07:31.132 04:56:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:31.132 04:56:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:07:31.132 04:56:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:31.132 04:56:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:31.132 04:56:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:31.132 04:56:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 
00:07:31.132 04:56:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:31.132 04:56:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:31.132 04:56:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:31.132 04:56:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:31.132 04:56:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:31.132 04:56:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:31.132 04:56:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:31.132 04:56:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:31.132 04:56:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:31.132 04:56:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:31.132 04:56:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:31.132 04:56:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:07:31.132 04:56:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:07:31.132 04:56:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:31.132 04:56:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.Ez8eE7v3fw 00:07:31.132 04:56:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=74808 00:07:31.132 04:56:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:31.132 04:56:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 74808 00:07:31.132 
04:56:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 74808 ']' 00:07:31.132 04:56:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:31.132 04:56:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:31.132 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:31.132 04:56:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:31.132 04:56:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:31.132 04:56:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.392 [2024-12-14 04:56:42.014122] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:31.392 [2024-12-14 04:56:42.014326] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74808 ] 00:07:31.392 [2024-12-14 04:56:42.164768] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.392 [2024-12-14 04:56:42.211206] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.392 [2024-12-14 04:56:42.252808] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:31.392 [2024-12-14 04:56:42.252844] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:32.364 04:56:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:32.364 04:56:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:07:32.364 04:56:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in 
"${base_bdevs[@]}" 00:07:32.364 04:56:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:32.364 04:56:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.364 04:56:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.364 BaseBdev1_malloc 00:07:32.364 04:56:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.364 04:56:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:32.364 04:56:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.364 04:56:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.364 true 00:07:32.364 04:56:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.364 04:56:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:32.364 04:56:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.364 04:56:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.364 [2024-12-14 04:56:42.915054] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:32.364 [2024-12-14 04:56:42.915109] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:32.364 [2024-12-14 04:56:42.915150] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:32.364 [2024-12-14 04:56:42.915198] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:32.364 [2024-12-14 04:56:42.917277] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:32.364 [2024-12-14 04:56:42.917311] vbdev_passthru.c: 710:vbdev_passthru_register: 
*NOTICE*: created pt_bdev for: BaseBdev1 00:07:32.364 BaseBdev1 00:07:32.364 04:56:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.364 04:56:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:32.364 04:56:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:32.364 04:56:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.364 04:56:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.364 BaseBdev2_malloc 00:07:32.364 04:56:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.364 04:56:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:32.364 04:56:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.364 04:56:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.364 true 00:07:32.364 04:56:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.365 04:56:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:32.365 04:56:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.365 04:56:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.365 [2024-12-14 04:56:42.963455] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:32.365 [2024-12-14 04:56:42.963546] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:32.365 [2024-12-14 04:56:42.963582] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:32.365 [2024-12-14 04:56:42.963616] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:32.365 [2024-12-14 04:56:42.965682] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:32.365 [2024-12-14 04:56:42.965766] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:32.365 BaseBdev2 00:07:32.365 04:56:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.365 04:56:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:32.365 04:56:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.365 04:56:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.365 [2024-12-14 04:56:42.975452] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:32.365 [2024-12-14 04:56:42.977313] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:32.365 [2024-12-14 04:56:42.977532] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:07:32.365 [2024-12-14 04:56:42.977583] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:32.365 [2024-12-14 04:56:42.977864] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:32.365 [2024-12-14 04:56:42.978054] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:07:32.365 [2024-12-14 04:56:42.978103] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:07:32.365 [2024-12-14 04:56:42.978283] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:32.365 04:56:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.365 04:56:42 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:07:32.365 04:56:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:32.365 04:56:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:32.365 04:56:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:32.365 04:56:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:32.365 04:56:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:32.365 04:56:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:32.365 04:56:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:32.365 04:56:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:32.365 04:56:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:32.365 04:56:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:32.365 04:56:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.365 04:56:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.365 04:56:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:32.365 04:56:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.365 04:56:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:32.365 "name": "raid_bdev1", 00:07:32.365 "uuid": "606492a3-3731-40da-99a9-d7721437fc03", 00:07:32.365 "strip_size_kb": 0, 00:07:32.365 "state": "online", 00:07:32.365 "raid_level": "raid1", 00:07:32.365 "superblock": true, 00:07:32.365 "num_base_bdevs": 2, 00:07:32.365 
"num_base_bdevs_discovered": 2, 00:07:32.365 "num_base_bdevs_operational": 2, 00:07:32.365 "base_bdevs_list": [ 00:07:32.365 { 00:07:32.365 "name": "BaseBdev1", 00:07:32.365 "uuid": "594db969-f23a-530e-9516-e6d4a27e4f90", 00:07:32.365 "is_configured": true, 00:07:32.365 "data_offset": 2048, 00:07:32.365 "data_size": 63488 00:07:32.365 }, 00:07:32.365 { 00:07:32.365 "name": "BaseBdev2", 00:07:32.365 "uuid": "92c50f2a-1697-5880-9f1a-820822491323", 00:07:32.365 "is_configured": true, 00:07:32.365 "data_offset": 2048, 00:07:32.365 "data_size": 63488 00:07:32.365 } 00:07:32.365 ] 00:07:32.365 }' 00:07:32.365 04:56:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:32.365 04:56:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.640 04:56:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:32.640 04:56:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:32.640 [2024-12-14 04:56:43.479129] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:33.579 04:56:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:07:33.579 04:56:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.579 04:56:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.579 04:56:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.579 04:56:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:33.579 04:56:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:07:33.579 04:56:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:07:33.579 04:56:44 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:07:33.579 04:56:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:07:33.579 04:56:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:33.579 04:56:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:33.579 04:56:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:33.579 04:56:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:33.579 04:56:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:33.579 04:56:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:33.579 04:56:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:33.579 04:56:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:33.579 04:56:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:33.579 04:56:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:33.579 04:56:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:33.579 04:56:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.579 04:56:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.579 04:56:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.579 04:56:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:33.579 "name": "raid_bdev1", 00:07:33.579 "uuid": "606492a3-3731-40da-99a9-d7721437fc03", 00:07:33.579 "strip_size_kb": 0, 00:07:33.579 "state": "online", 
00:07:33.579 "raid_level": "raid1", 00:07:33.579 "superblock": true, 00:07:33.579 "num_base_bdevs": 2, 00:07:33.579 "num_base_bdevs_discovered": 2, 00:07:33.579 "num_base_bdevs_operational": 2, 00:07:33.579 "base_bdevs_list": [ 00:07:33.579 { 00:07:33.579 "name": "BaseBdev1", 00:07:33.579 "uuid": "594db969-f23a-530e-9516-e6d4a27e4f90", 00:07:33.579 "is_configured": true, 00:07:33.579 "data_offset": 2048, 00:07:33.579 "data_size": 63488 00:07:33.579 }, 00:07:33.579 { 00:07:33.579 "name": "BaseBdev2", 00:07:33.579 "uuid": "92c50f2a-1697-5880-9f1a-820822491323", 00:07:33.579 "is_configured": true, 00:07:33.579 "data_offset": 2048, 00:07:33.579 "data_size": 63488 00:07:33.579 } 00:07:33.579 ] 00:07:33.579 }' 00:07:33.579 04:56:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:33.579 04:56:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.147 04:56:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:34.147 04:56:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:34.147 04:56:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.147 [2024-12-14 04:56:44.834483] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:34.147 [2024-12-14 04:56:44.834516] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:34.147 [2024-12-14 04:56:44.837063] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:34.147 [2024-12-14 04:56:44.837106] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:34.147 [2024-12-14 04:56:44.837198] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:34.147 [2024-12-14 04:56:44.837219] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name 
raid_bdev1, state offline 00:07:34.147 { 00:07:34.147 "results": [ 00:07:34.147 { 00:07:34.147 "job": "raid_bdev1", 00:07:34.147 "core_mask": "0x1", 00:07:34.147 "workload": "randrw", 00:07:34.147 "percentage": 50, 00:07:34.147 "status": "finished", 00:07:34.147 "queue_depth": 1, 00:07:34.147 "io_size": 131072, 00:07:34.147 "runtime": 1.356158, 00:07:34.147 "iops": 20367.095869360353, 00:07:34.147 "mibps": 2545.886983670044, 00:07:34.147 "io_failed": 0, 00:07:34.147 "io_timeout": 0, 00:07:34.147 "avg_latency_us": 46.67807435295814, 00:07:34.147 "min_latency_us": 21.351965065502185, 00:07:34.147 "max_latency_us": 1387.989519650655 00:07:34.147 } 00:07:34.147 ], 00:07:34.147 "core_count": 1 00:07:34.147 } 00:07:34.147 04:56:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:34.147 04:56:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 74808 00:07:34.147 04:56:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 74808 ']' 00:07:34.147 04:56:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 74808 00:07:34.147 04:56:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:07:34.147 04:56:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:34.147 04:56:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74808 00:07:34.147 killing process with pid 74808 00:07:34.147 04:56:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:34.147 04:56:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:34.147 04:56:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74808' 00:07:34.147 04:56:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 74808 00:07:34.147 [2024-12-14 
04:56:44.879954] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:34.147 04:56:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 74808 00:07:34.148 [2024-12-14 04:56:44.895779] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:34.409 04:56:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.Ez8eE7v3fw 00:07:34.409 04:56:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:34.409 04:56:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:34.409 04:56:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:07:34.409 04:56:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:07:34.409 04:56:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:34.409 04:56:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:07:34.409 ************************************ 00:07:34.409 END TEST raid_read_error_test 00:07:34.410 ************************************ 00:07:34.410 04:56:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:07:34.410 00:07:34.410 real 0m3.254s 00:07:34.410 user 0m4.136s 00:07:34.410 sys 0m0.501s 00:07:34.410 04:56:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:34.410 04:56:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.410 04:56:45 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 2 write 00:07:34.410 04:56:45 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:34.410 04:56:45 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:34.410 04:56:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:34.410 ************************************ 00:07:34.410 START TEST 
raid_write_error_test 00:07:34.410 ************************************ 00:07:34.410 04:56:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 2 write 00:07:34.410 04:56:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:07:34.410 04:56:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:34.410 04:56:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:07:34.410 04:56:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:34.410 04:56:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:34.410 04:56:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:34.410 04:56:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:34.410 04:56:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:34.410 04:56:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:34.410 04:56:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:34.410 04:56:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:34.410 04:56:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:34.410 04:56:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:34.410 04:56:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:34.410 04:56:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:34.410 04:56:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:34.410 04:56:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:34.410 04:56:45 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:34.410 04:56:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:07:34.410 04:56:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:07:34.410 04:56:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:34.410 04:56:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.lxeosimBrH 00:07:34.410 04:56:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=74937 00:07:34.410 04:56:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:34.410 04:56:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 74937 00:07:34.410 04:56:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 74937 ']' 00:07:34.410 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:34.410 04:56:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:34.410 04:56:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:34.410 04:56:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:34.410 04:56:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:34.410 04:56:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.671 [2024-12-14 04:56:45.316437] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:07:34.671 [2024-12-14 04:56:45.316597] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74937 ] 00:07:34.671 [2024-12-14 04:56:45.477186] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:34.671 [2024-12-14 04:56:45.522206] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.930 [2024-12-14 04:56:45.564202] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:34.930 [2024-12-14 04:56:45.564239] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:35.498 04:56:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:35.498 04:56:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:07:35.498 04:56:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:35.499 04:56:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:35.499 04:56:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:35.499 04:56:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.499 BaseBdev1_malloc 00:07:35.499 04:56:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:35.499 04:56:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:35.499 04:56:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:35.499 04:56:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.499 true 00:07:35.499 04:56:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:07:35.499 04:56:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:35.499 04:56:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:35.499 04:56:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.499 [2024-12-14 04:56:46.158334] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:35.499 [2024-12-14 04:56:46.158384] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:35.499 [2024-12-14 04:56:46.158430] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:35.499 [2024-12-14 04:56:46.158439] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:35.499 [2024-12-14 04:56:46.160504] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:35.499 [2024-12-14 04:56:46.160546] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:35.499 BaseBdev1 00:07:35.499 04:56:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:35.499 04:56:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:35.499 04:56:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:35.499 04:56:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:35.499 04:56:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.499 BaseBdev2_malloc 00:07:35.499 04:56:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:35.499 04:56:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:35.499 04:56:46 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:35.499 04:56:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.499 true 00:07:35.499 04:56:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:35.499 04:56:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:35.499 04:56:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:35.499 04:56:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.499 [2024-12-14 04:56:46.220292] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:35.499 [2024-12-14 04:56:46.220366] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:35.499 [2024-12-14 04:56:46.220398] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:35.499 [2024-12-14 04:56:46.220414] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:35.499 [2024-12-14 04:56:46.222958] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:35.499 [2024-12-14 04:56:46.223000] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:35.499 BaseBdev2 00:07:35.499 04:56:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:35.499 04:56:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:35.499 04:56:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:35.499 04:56:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.499 [2024-12-14 04:56:46.232222] bdev_raid.c:3322:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:07:35.499 [2024-12-14 04:56:46.234012] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:35.499 [2024-12-14 04:56:46.234197] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:07:35.499 [2024-12-14 04:56:46.234211] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:35.499 [2024-12-14 04:56:46.234449] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:35.499 [2024-12-14 04:56:46.234582] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:07:35.499 [2024-12-14 04:56:46.234600] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:07:35.499 [2024-12-14 04:56:46.234731] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:35.499 04:56:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:35.499 04:56:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:07:35.499 04:56:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:35.499 04:56:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:35.499 04:56:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:35.499 04:56:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:35.499 04:56:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:35.499 04:56:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:35.499 04:56:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:35.499 04:56:46 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:35.499 04:56:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:35.499 04:56:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:35.499 04:56:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:35.499 04:56:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:35.499 04:56:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.499 04:56:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:35.499 04:56:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:35.499 "name": "raid_bdev1", 00:07:35.499 "uuid": "8348dc6f-f3a1-428e-8e18-e28df0647bc5", 00:07:35.499 "strip_size_kb": 0, 00:07:35.499 "state": "online", 00:07:35.499 "raid_level": "raid1", 00:07:35.499 "superblock": true, 00:07:35.499 "num_base_bdevs": 2, 00:07:35.499 "num_base_bdevs_discovered": 2, 00:07:35.499 "num_base_bdevs_operational": 2, 00:07:35.499 "base_bdevs_list": [ 00:07:35.499 { 00:07:35.499 "name": "BaseBdev1", 00:07:35.499 "uuid": "7ab41b8f-c85a-524e-a480-02020f9a5e5a", 00:07:35.499 "is_configured": true, 00:07:35.499 "data_offset": 2048, 00:07:35.499 "data_size": 63488 00:07:35.499 }, 00:07:35.499 { 00:07:35.499 "name": "BaseBdev2", 00:07:35.499 "uuid": "08d6ca4d-1b5a-5f01-835d-e37ff680b17f", 00:07:35.499 "is_configured": true, 00:07:35.499 "data_offset": 2048, 00:07:35.499 "data_size": 63488 00:07:35.499 } 00:07:35.499 ] 00:07:35.499 }' 00:07:35.499 04:56:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:35.499 04:56:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.069 04:56:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:36.069 04:56:46 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:36.069 [2024-12-14 04:56:46.743725] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:37.008 04:56:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:07:37.008 04:56:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.008 04:56:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.008 [2024-12-14 04:56:47.663854] bdev_raid.c:2272:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:07:37.008 [2024-12-14 04:56:47.663978] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:37.008 [2024-12-14 04:56:47.664227] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005d40 00:07:37.008 04:56:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.008 04:56:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:37.008 04:56:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:07:37.008 04:56:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:07:37.008 04:56:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=1 00:07:37.008 04:56:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:07:37.008 04:56:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:37.008 04:56:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:37.008 04:56:47 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:37.008 04:56:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:37.008 04:56:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:37.008 04:56:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:37.008 04:56:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:37.008 04:56:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:37.008 04:56:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:37.008 04:56:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:37.008 04:56:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:37.008 04:56:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.008 04:56:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.008 04:56:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.008 04:56:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:37.008 "name": "raid_bdev1", 00:07:37.008 "uuid": "8348dc6f-f3a1-428e-8e18-e28df0647bc5", 00:07:37.008 "strip_size_kb": 0, 00:07:37.008 "state": "online", 00:07:37.008 "raid_level": "raid1", 00:07:37.008 "superblock": true, 00:07:37.008 "num_base_bdevs": 2, 00:07:37.008 "num_base_bdevs_discovered": 1, 00:07:37.008 "num_base_bdevs_operational": 1, 00:07:37.008 "base_bdevs_list": [ 00:07:37.008 { 00:07:37.008 "name": null, 00:07:37.008 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:37.008 "is_configured": false, 00:07:37.008 "data_offset": 0, 00:07:37.008 "data_size": 63488 00:07:37.008 }, 00:07:37.008 { 00:07:37.008 "name": 
"BaseBdev2", 00:07:37.008 "uuid": "08d6ca4d-1b5a-5f01-835d-e37ff680b17f", 00:07:37.008 "is_configured": true, 00:07:37.008 "data_offset": 2048, 00:07:37.008 "data_size": 63488 00:07:37.008 } 00:07:37.008 ] 00:07:37.008 }' 00:07:37.008 04:56:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:37.008 04:56:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.268 04:56:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:37.268 04:56:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.268 04:56:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.268 [2024-12-14 04:56:48.112677] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:37.268 [2024-12-14 04:56:48.112713] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:37.268 [2024-12-14 04:56:48.115126] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:37.268 [2024-12-14 04:56:48.115198] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:37.268 [2024-12-14 04:56:48.115253] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:37.268 [2024-12-14 04:56:48.115266] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:07:37.268 { 00:07:37.268 "results": [ 00:07:37.268 { 00:07:37.268 "job": "raid_bdev1", 00:07:37.268 "core_mask": "0x1", 00:07:37.268 "workload": "randrw", 00:07:37.268 "percentage": 50, 00:07:37.268 "status": "finished", 00:07:37.268 "queue_depth": 1, 00:07:37.268 "io_size": 131072, 00:07:37.268 "runtime": 1.369702, 00:07:37.268 "iops": 23624.846864500454, 00:07:37.268 "mibps": 2953.1058580625568, 00:07:37.268 "io_failed": 0, 00:07:37.268 "io_timeout": 0, 
00:07:37.268 "avg_latency_us": 39.83821265008513, 00:07:37.268 "min_latency_us": 21.463755458515283, 00:07:37.268 "max_latency_us": 1373.6803493449781 00:07:37.268 } 00:07:37.268 ], 00:07:37.268 "core_count": 1 00:07:37.268 } 00:07:37.268 04:56:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.268 04:56:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 74937 00:07:37.268 04:56:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 74937 ']' 00:07:37.268 04:56:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 74937 00:07:37.268 04:56:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:07:37.268 04:56:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:37.268 04:56:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74937 00:07:37.527 04:56:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:37.527 04:56:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:37.527 killing process with pid 74937 00:07:37.528 04:56:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74937' 00:07:37.528 04:56:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 74937 00:07:37.528 [2024-12-14 04:56:48.157015] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:37.528 04:56:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 74937 00:07:37.528 [2024-12-14 04:56:48.172722] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:37.528 04:56:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.lxeosimBrH 00:07:37.528 04:56:48 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:37.787 04:56:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:37.787 04:56:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:07:37.787 04:56:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:07:37.787 ************************************ 00:07:37.787 END TEST raid_write_error_test 00:07:37.787 ************************************ 00:07:37.787 04:56:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:37.787 04:56:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:07:37.787 04:56:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:07:37.787 00:07:37.787 real 0m3.201s 00:07:37.787 user 0m4.025s 00:07:37.787 sys 0m0.514s 00:07:37.787 04:56:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:37.787 04:56:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.787 04:56:48 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:07:37.787 04:56:48 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:07:37.787 04:56:48 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:07:37.787 04:56:48 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:37.787 04:56:48 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:37.787 04:56:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:37.787 ************************************ 00:07:37.787 START TEST raid_state_function_test 00:07:37.787 ************************************ 00:07:37.787 04:56:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 3 false 00:07:37.787 04:56:48 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:07:37.787 04:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:07:37.788 04:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:07:37.788 04:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:37.788 04:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:37.788 04:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:37.788 04:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:37.788 04:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:37.788 04:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:37.788 04:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:37.788 04:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:37.788 04:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:37.788 04:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:07:37.788 04:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:37.788 04:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:37.788 04:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:07:37.788 04:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:37.788 04:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:37.788 04:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:37.788 
04:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:37.788 04:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:37.788 04:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:07:37.788 04:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:37.788 04:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:37.788 04:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:07:37.788 Process raid pid: 75064 00:07:37.788 04:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:07:37.788 04:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=75064 00:07:37.788 04:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:37.788 04:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 75064' 00:07:37.788 04:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 75064 00:07:37.788 04:56:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 75064 ']' 00:07:37.788 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:37.788 04:56:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:37.788 04:56:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:37.788 04:56:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:37.788 04:56:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:37.788 04:56:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.788 [2024-12-14 04:56:48.581047] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:37.788 [2024-12-14 04:56:48.581202] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:38.048 [2024-12-14 04:56:48.742968] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.048 [2024-12-14 04:56:48.788286] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.048 [2024-12-14 04:56:48.829933] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:38.048 [2024-12-14 04:56:48.829967] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:38.616 04:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:38.616 04:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:07:38.616 04:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:07:38.616 04:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.616 04:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.616 [2024-12-14 04:56:49.399277] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:38.616 [2024-12-14 04:56:49.399375] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:38.616 [2024-12-14 04:56:49.399404] 
bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:38.616 [2024-12-14 04:56:49.399415] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:38.616 [2024-12-14 04:56:49.399421] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:07:38.616 [2024-12-14 04:56:49.399432] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:07:38.616 04:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.616 04:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:38.616 04:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:38.616 04:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:38.616 04:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:38.616 04:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:38.616 04:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:38.616 04:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:38.616 04:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:38.616 04:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:38.616 04:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:38.616 04:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:38.616 04:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:07:38.616 04:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.616 04:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.616 04:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.616 04:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:38.616 "name": "Existed_Raid", 00:07:38.616 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:38.616 "strip_size_kb": 64, 00:07:38.616 "state": "configuring", 00:07:38.616 "raid_level": "raid0", 00:07:38.616 "superblock": false, 00:07:38.616 "num_base_bdevs": 3, 00:07:38.616 "num_base_bdevs_discovered": 0, 00:07:38.616 "num_base_bdevs_operational": 3, 00:07:38.616 "base_bdevs_list": [ 00:07:38.616 { 00:07:38.616 "name": "BaseBdev1", 00:07:38.616 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:38.616 "is_configured": false, 00:07:38.616 "data_offset": 0, 00:07:38.616 "data_size": 0 00:07:38.616 }, 00:07:38.616 { 00:07:38.616 "name": "BaseBdev2", 00:07:38.616 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:38.616 "is_configured": false, 00:07:38.616 "data_offset": 0, 00:07:38.616 "data_size": 0 00:07:38.616 }, 00:07:38.616 { 00:07:38.616 "name": "BaseBdev3", 00:07:38.616 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:38.616 "is_configured": false, 00:07:38.616 "data_offset": 0, 00:07:38.616 "data_size": 0 00:07:38.616 } 00:07:38.616 ] 00:07:38.616 }' 00:07:38.616 04:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:38.616 04:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.185 04:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:39.185 04:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:39.185 04:56:49 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.185 [2024-12-14 04:56:49.870350] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:39.185 [2024-12-14 04:56:49.870391] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:07:39.186 04:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:39.186 04:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:07:39.186 04:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:39.186 04:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.186 [2024-12-14 04:56:49.882360] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:39.186 [2024-12-14 04:56:49.882443] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:39.186 [2024-12-14 04:56:49.882486] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:39.186 [2024-12-14 04:56:49.882508] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:39.186 [2024-12-14 04:56:49.882526] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:07:39.186 [2024-12-14 04:56:49.882546] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:07:39.186 04:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:39.186 04:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:39.186 04:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:07:39.186 04:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.186 [2024-12-14 04:56:49.903222] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:39.186 BaseBdev1 00:07:39.186 04:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:39.186 04:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:39.186 04:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:07:39.186 04:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:39.186 04:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:07:39.186 04:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:39.186 04:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:39.186 04:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:39.186 04:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:39.186 04:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.186 04:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:39.186 04:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:39.186 04:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:39.186 04:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.186 [ 00:07:39.186 { 00:07:39.186 "name": "BaseBdev1", 00:07:39.186 "aliases": [ 00:07:39.186 "1088761d-4dec-427f-bb5f-82f9aa7dffc4" 00:07:39.186 ], 00:07:39.186 
"product_name": "Malloc disk", 00:07:39.186 "block_size": 512, 00:07:39.186 "num_blocks": 65536, 00:07:39.186 "uuid": "1088761d-4dec-427f-bb5f-82f9aa7dffc4", 00:07:39.186 "assigned_rate_limits": { 00:07:39.186 "rw_ios_per_sec": 0, 00:07:39.186 "rw_mbytes_per_sec": 0, 00:07:39.186 "r_mbytes_per_sec": 0, 00:07:39.186 "w_mbytes_per_sec": 0 00:07:39.186 }, 00:07:39.186 "claimed": true, 00:07:39.186 "claim_type": "exclusive_write", 00:07:39.186 "zoned": false, 00:07:39.186 "supported_io_types": { 00:07:39.186 "read": true, 00:07:39.186 "write": true, 00:07:39.186 "unmap": true, 00:07:39.186 "flush": true, 00:07:39.186 "reset": true, 00:07:39.186 "nvme_admin": false, 00:07:39.186 "nvme_io": false, 00:07:39.186 "nvme_io_md": false, 00:07:39.186 "write_zeroes": true, 00:07:39.186 "zcopy": true, 00:07:39.186 "get_zone_info": false, 00:07:39.186 "zone_management": false, 00:07:39.186 "zone_append": false, 00:07:39.186 "compare": false, 00:07:39.186 "compare_and_write": false, 00:07:39.186 "abort": true, 00:07:39.186 "seek_hole": false, 00:07:39.186 "seek_data": false, 00:07:39.186 "copy": true, 00:07:39.186 "nvme_iov_md": false 00:07:39.186 }, 00:07:39.186 "memory_domains": [ 00:07:39.186 { 00:07:39.186 "dma_device_id": "system", 00:07:39.186 "dma_device_type": 1 00:07:39.186 }, 00:07:39.186 { 00:07:39.186 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:39.186 "dma_device_type": 2 00:07:39.186 } 00:07:39.186 ], 00:07:39.186 "driver_specific": {} 00:07:39.186 } 00:07:39.186 ] 00:07:39.186 04:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:39.186 04:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:07:39.186 04:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:39.186 04:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:39.186 04:56:49 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:39.186 04:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:39.186 04:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:39.186 04:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:39.186 04:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:39.186 04:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:39.186 04:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:39.186 04:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:39.186 04:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:39.186 04:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:39.186 04:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:39.186 04:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.186 04:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:39.186 04:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:39.186 "name": "Existed_Raid", 00:07:39.186 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:39.186 "strip_size_kb": 64, 00:07:39.186 "state": "configuring", 00:07:39.186 "raid_level": "raid0", 00:07:39.186 "superblock": false, 00:07:39.186 "num_base_bdevs": 3, 00:07:39.186 "num_base_bdevs_discovered": 1, 00:07:39.186 "num_base_bdevs_operational": 3, 00:07:39.186 "base_bdevs_list": [ 00:07:39.186 { 00:07:39.186 "name": "BaseBdev1", 
00:07:39.186 "uuid": "1088761d-4dec-427f-bb5f-82f9aa7dffc4", 00:07:39.186 "is_configured": true, 00:07:39.186 "data_offset": 0, 00:07:39.186 "data_size": 65536 00:07:39.186 }, 00:07:39.186 { 00:07:39.186 "name": "BaseBdev2", 00:07:39.186 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:39.186 "is_configured": false, 00:07:39.186 "data_offset": 0, 00:07:39.186 "data_size": 0 00:07:39.186 }, 00:07:39.186 { 00:07:39.186 "name": "BaseBdev3", 00:07:39.186 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:39.186 "is_configured": false, 00:07:39.186 "data_offset": 0, 00:07:39.186 "data_size": 0 00:07:39.186 } 00:07:39.186 ] 00:07:39.186 }' 00:07:39.186 04:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:39.186 04:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.755 04:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:39.755 04:56:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:39.755 04:56:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.755 [2024-12-14 04:56:50.386393] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:39.755 [2024-12-14 04:56:50.386446] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:07:39.755 04:56:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:39.755 04:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:07:39.755 04:56:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:39.755 04:56:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.755 [2024-12-14 
04:56:50.398411] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:39.755 [2024-12-14 04:56:50.400276] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:39.755 [2024-12-14 04:56:50.400315] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:39.755 [2024-12-14 04:56:50.400324] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:07:39.755 [2024-12-14 04:56:50.400334] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:07:39.755 04:56:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:39.755 04:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:39.755 04:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:39.755 04:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:39.755 04:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:39.755 04:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:39.755 04:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:39.755 04:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:39.755 04:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:39.755 04:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:39.755 04:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:39.755 04:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:07:39.755 04:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:39.755 04:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:39.755 04:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:39.755 04:56:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:39.755 04:56:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.755 04:56:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:39.755 04:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:39.755 "name": "Existed_Raid", 00:07:39.755 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:39.755 "strip_size_kb": 64, 00:07:39.755 "state": "configuring", 00:07:39.755 "raid_level": "raid0", 00:07:39.756 "superblock": false, 00:07:39.756 "num_base_bdevs": 3, 00:07:39.756 "num_base_bdevs_discovered": 1, 00:07:39.756 "num_base_bdevs_operational": 3, 00:07:39.756 "base_bdevs_list": [ 00:07:39.756 { 00:07:39.756 "name": "BaseBdev1", 00:07:39.756 "uuid": "1088761d-4dec-427f-bb5f-82f9aa7dffc4", 00:07:39.756 "is_configured": true, 00:07:39.756 "data_offset": 0, 00:07:39.756 "data_size": 65536 00:07:39.756 }, 00:07:39.756 { 00:07:39.756 "name": "BaseBdev2", 00:07:39.756 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:39.756 "is_configured": false, 00:07:39.756 "data_offset": 0, 00:07:39.756 "data_size": 0 00:07:39.756 }, 00:07:39.756 { 00:07:39.756 "name": "BaseBdev3", 00:07:39.756 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:39.756 "is_configured": false, 00:07:39.756 "data_offset": 0, 00:07:39.756 "data_size": 0 00:07:39.756 } 00:07:39.756 ] 00:07:39.756 }' 00:07:39.756 04:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:07:39.756 04:56:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.015 04:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:40.015 04:56:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:40.015 04:56:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.015 [2024-12-14 04:56:50.776761] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:40.015 BaseBdev2 00:07:40.015 04:56:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:40.015 04:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:40.015 04:56:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:07:40.015 04:56:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:40.015 04:56:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:07:40.015 04:56:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:40.015 04:56:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:40.015 04:56:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:40.015 04:56:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:40.015 04:56:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.015 04:56:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:40.015 04:56:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:40.015 04:56:50 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:40.016 04:56:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.016 [ 00:07:40.016 { 00:07:40.016 "name": "BaseBdev2", 00:07:40.016 "aliases": [ 00:07:40.016 "517a2a14-3e37-42ac-9f05-199bd39477f8" 00:07:40.016 ], 00:07:40.016 "product_name": "Malloc disk", 00:07:40.016 "block_size": 512, 00:07:40.016 "num_blocks": 65536, 00:07:40.016 "uuid": "517a2a14-3e37-42ac-9f05-199bd39477f8", 00:07:40.016 "assigned_rate_limits": { 00:07:40.016 "rw_ios_per_sec": 0, 00:07:40.016 "rw_mbytes_per_sec": 0, 00:07:40.016 "r_mbytes_per_sec": 0, 00:07:40.016 "w_mbytes_per_sec": 0 00:07:40.016 }, 00:07:40.016 "claimed": true, 00:07:40.016 "claim_type": "exclusive_write", 00:07:40.016 "zoned": false, 00:07:40.016 "supported_io_types": { 00:07:40.016 "read": true, 00:07:40.016 "write": true, 00:07:40.016 "unmap": true, 00:07:40.016 "flush": true, 00:07:40.016 "reset": true, 00:07:40.016 "nvme_admin": false, 00:07:40.016 "nvme_io": false, 00:07:40.016 "nvme_io_md": false, 00:07:40.016 "write_zeroes": true, 00:07:40.016 "zcopy": true, 00:07:40.016 "get_zone_info": false, 00:07:40.016 "zone_management": false, 00:07:40.016 "zone_append": false, 00:07:40.016 "compare": false, 00:07:40.016 "compare_and_write": false, 00:07:40.016 "abort": true, 00:07:40.016 "seek_hole": false, 00:07:40.016 "seek_data": false, 00:07:40.016 "copy": true, 00:07:40.016 "nvme_iov_md": false 00:07:40.016 }, 00:07:40.016 "memory_domains": [ 00:07:40.016 { 00:07:40.016 "dma_device_id": "system", 00:07:40.016 "dma_device_type": 1 00:07:40.016 }, 00:07:40.016 { 00:07:40.016 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:40.016 "dma_device_type": 2 00:07:40.016 } 00:07:40.016 ], 00:07:40.016 "driver_specific": {} 00:07:40.016 } 00:07:40.016 ] 00:07:40.016 04:56:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:40.016 04:56:50 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:07:40.016 04:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:40.016 04:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:40.016 04:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:40.016 04:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:40.016 04:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:40.016 04:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:40.016 04:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:40.016 04:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:40.016 04:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:40.016 04:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:40.016 04:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:40.016 04:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:40.016 04:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:40.016 04:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:40.016 04:56:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:40.016 04:56:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.016 04:56:50 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:40.016 04:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:40.016 "name": "Existed_Raid", 00:07:40.016 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:40.016 "strip_size_kb": 64, 00:07:40.016 "state": "configuring", 00:07:40.016 "raid_level": "raid0", 00:07:40.016 "superblock": false, 00:07:40.016 "num_base_bdevs": 3, 00:07:40.016 "num_base_bdevs_discovered": 2, 00:07:40.016 "num_base_bdevs_operational": 3, 00:07:40.016 "base_bdevs_list": [ 00:07:40.016 { 00:07:40.016 "name": "BaseBdev1", 00:07:40.016 "uuid": "1088761d-4dec-427f-bb5f-82f9aa7dffc4", 00:07:40.016 "is_configured": true, 00:07:40.016 "data_offset": 0, 00:07:40.016 "data_size": 65536 00:07:40.016 }, 00:07:40.016 { 00:07:40.016 "name": "BaseBdev2", 00:07:40.016 "uuid": "517a2a14-3e37-42ac-9f05-199bd39477f8", 00:07:40.016 "is_configured": true, 00:07:40.016 "data_offset": 0, 00:07:40.016 "data_size": 65536 00:07:40.016 }, 00:07:40.016 { 00:07:40.016 "name": "BaseBdev3", 00:07:40.016 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:40.016 "is_configured": false, 00:07:40.016 "data_offset": 0, 00:07:40.016 "data_size": 0 00:07:40.016 } 00:07:40.016 ] 00:07:40.016 }' 00:07:40.016 04:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:40.016 04:56:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.585 04:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:07:40.585 04:56:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:40.585 04:56:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.585 [2024-12-14 04:56:51.251227] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:07:40.585 [2024-12-14 04:56:51.251327] 
bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:07:40.585 [2024-12-14 04:56:51.251344] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:07:40.585 [2024-12-14 04:56:51.251655] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:07:40.585 [2024-12-14 04:56:51.251787] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:07:40.585 [2024-12-14 04:56:51.251797] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:07:40.585 [2024-12-14 04:56:51.251985] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:40.585 BaseBdev3 00:07:40.585 04:56:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:40.585 04:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:07:40.585 04:56:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:07:40.585 04:56:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:40.585 04:56:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:07:40.585 04:56:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:40.585 04:56:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:40.585 04:56:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:40.585 04:56:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:40.585 04:56:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.585 04:56:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:40.585 
04:56:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:07:40.585 04:56:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:40.585 04:56:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.585 [ 00:07:40.585 { 00:07:40.585 "name": "BaseBdev3", 00:07:40.585 "aliases": [ 00:07:40.585 "40d80074-3019-487d-9cf4-fcaf3ca0b511" 00:07:40.585 ], 00:07:40.585 "product_name": "Malloc disk", 00:07:40.585 "block_size": 512, 00:07:40.585 "num_blocks": 65536, 00:07:40.585 "uuid": "40d80074-3019-487d-9cf4-fcaf3ca0b511", 00:07:40.585 "assigned_rate_limits": { 00:07:40.585 "rw_ios_per_sec": 0, 00:07:40.585 "rw_mbytes_per_sec": 0, 00:07:40.585 "r_mbytes_per_sec": 0, 00:07:40.585 "w_mbytes_per_sec": 0 00:07:40.585 }, 00:07:40.585 "claimed": true, 00:07:40.585 "claim_type": "exclusive_write", 00:07:40.585 "zoned": false, 00:07:40.585 "supported_io_types": { 00:07:40.585 "read": true, 00:07:40.585 "write": true, 00:07:40.585 "unmap": true, 00:07:40.585 "flush": true, 00:07:40.585 "reset": true, 00:07:40.585 "nvme_admin": false, 00:07:40.585 "nvme_io": false, 00:07:40.585 "nvme_io_md": false, 00:07:40.585 "write_zeroes": true, 00:07:40.585 "zcopy": true, 00:07:40.585 "get_zone_info": false, 00:07:40.585 "zone_management": false, 00:07:40.585 "zone_append": false, 00:07:40.585 "compare": false, 00:07:40.585 "compare_and_write": false, 00:07:40.585 "abort": true, 00:07:40.585 "seek_hole": false, 00:07:40.585 "seek_data": false, 00:07:40.585 "copy": true, 00:07:40.585 "nvme_iov_md": false 00:07:40.585 }, 00:07:40.585 "memory_domains": [ 00:07:40.585 { 00:07:40.585 "dma_device_id": "system", 00:07:40.585 "dma_device_type": 1 00:07:40.585 }, 00:07:40.585 { 00:07:40.585 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:40.585 "dma_device_type": 2 00:07:40.585 } 00:07:40.585 ], 00:07:40.585 "driver_specific": {} 00:07:40.585 } 00:07:40.585 ] 
00:07:40.585 04:56:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:40.585 04:56:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:07:40.585 04:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:40.585 04:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:40.585 04:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:07:40.585 04:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:40.585 04:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:40.585 04:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:40.585 04:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:40.585 04:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:40.585 04:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:40.585 04:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:40.585 04:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:40.585 04:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:40.585 04:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:40.585 04:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:40.585 04:56:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:40.585 04:56:51 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:07:40.585 04:56:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:40.585 04:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:40.585 "name": "Existed_Raid", 00:07:40.585 "uuid": "d2148cf5-878c-4d95-9593-94af187194e1", 00:07:40.585 "strip_size_kb": 64, 00:07:40.585 "state": "online", 00:07:40.585 "raid_level": "raid0", 00:07:40.585 "superblock": false, 00:07:40.585 "num_base_bdevs": 3, 00:07:40.585 "num_base_bdevs_discovered": 3, 00:07:40.585 "num_base_bdevs_operational": 3, 00:07:40.585 "base_bdevs_list": [ 00:07:40.585 { 00:07:40.585 "name": "BaseBdev1", 00:07:40.585 "uuid": "1088761d-4dec-427f-bb5f-82f9aa7dffc4", 00:07:40.585 "is_configured": true, 00:07:40.585 "data_offset": 0, 00:07:40.585 "data_size": 65536 00:07:40.585 }, 00:07:40.585 { 00:07:40.585 "name": "BaseBdev2", 00:07:40.585 "uuid": "517a2a14-3e37-42ac-9f05-199bd39477f8", 00:07:40.585 "is_configured": true, 00:07:40.585 "data_offset": 0, 00:07:40.585 "data_size": 65536 00:07:40.585 }, 00:07:40.585 { 00:07:40.585 "name": "BaseBdev3", 00:07:40.585 "uuid": "40d80074-3019-487d-9cf4-fcaf3ca0b511", 00:07:40.585 "is_configured": true, 00:07:40.585 "data_offset": 0, 00:07:40.585 "data_size": 65536 00:07:40.585 } 00:07:40.585 ] 00:07:40.585 }' 00:07:40.585 04:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:40.585 04:56:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.844 04:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:40.844 04:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:40.844 04:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:40.844 04:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # 
local base_bdev_names 00:07:40.844 04:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:40.844 04:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:40.844 04:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:40.844 04:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:40.844 04:56:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:40.844 04:56:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.104 [2024-12-14 04:56:51.726728] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:41.104 04:56:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.104 04:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:41.104 "name": "Existed_Raid", 00:07:41.104 "aliases": [ 00:07:41.104 "d2148cf5-878c-4d95-9593-94af187194e1" 00:07:41.104 ], 00:07:41.104 "product_name": "Raid Volume", 00:07:41.104 "block_size": 512, 00:07:41.104 "num_blocks": 196608, 00:07:41.104 "uuid": "d2148cf5-878c-4d95-9593-94af187194e1", 00:07:41.104 "assigned_rate_limits": { 00:07:41.104 "rw_ios_per_sec": 0, 00:07:41.104 "rw_mbytes_per_sec": 0, 00:07:41.104 "r_mbytes_per_sec": 0, 00:07:41.104 "w_mbytes_per_sec": 0 00:07:41.104 }, 00:07:41.104 "claimed": false, 00:07:41.104 "zoned": false, 00:07:41.104 "supported_io_types": { 00:07:41.104 "read": true, 00:07:41.104 "write": true, 00:07:41.104 "unmap": true, 00:07:41.104 "flush": true, 00:07:41.104 "reset": true, 00:07:41.104 "nvme_admin": false, 00:07:41.104 "nvme_io": false, 00:07:41.104 "nvme_io_md": false, 00:07:41.104 "write_zeroes": true, 00:07:41.104 "zcopy": false, 00:07:41.104 "get_zone_info": false, 00:07:41.104 "zone_management": false, 00:07:41.104 
"zone_append": false, 00:07:41.104 "compare": false, 00:07:41.104 "compare_and_write": false, 00:07:41.104 "abort": false, 00:07:41.104 "seek_hole": false, 00:07:41.104 "seek_data": false, 00:07:41.104 "copy": false, 00:07:41.104 "nvme_iov_md": false 00:07:41.104 }, 00:07:41.104 "memory_domains": [ 00:07:41.104 { 00:07:41.104 "dma_device_id": "system", 00:07:41.104 "dma_device_type": 1 00:07:41.104 }, 00:07:41.104 { 00:07:41.104 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:41.104 "dma_device_type": 2 00:07:41.104 }, 00:07:41.104 { 00:07:41.104 "dma_device_id": "system", 00:07:41.104 "dma_device_type": 1 00:07:41.104 }, 00:07:41.104 { 00:07:41.104 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:41.104 "dma_device_type": 2 00:07:41.104 }, 00:07:41.104 { 00:07:41.104 "dma_device_id": "system", 00:07:41.104 "dma_device_type": 1 00:07:41.104 }, 00:07:41.104 { 00:07:41.104 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:41.104 "dma_device_type": 2 00:07:41.104 } 00:07:41.104 ], 00:07:41.104 "driver_specific": { 00:07:41.104 "raid": { 00:07:41.104 "uuid": "d2148cf5-878c-4d95-9593-94af187194e1", 00:07:41.104 "strip_size_kb": 64, 00:07:41.104 "state": "online", 00:07:41.104 "raid_level": "raid0", 00:07:41.104 "superblock": false, 00:07:41.104 "num_base_bdevs": 3, 00:07:41.104 "num_base_bdevs_discovered": 3, 00:07:41.104 "num_base_bdevs_operational": 3, 00:07:41.104 "base_bdevs_list": [ 00:07:41.104 { 00:07:41.104 "name": "BaseBdev1", 00:07:41.104 "uuid": "1088761d-4dec-427f-bb5f-82f9aa7dffc4", 00:07:41.104 "is_configured": true, 00:07:41.104 "data_offset": 0, 00:07:41.104 "data_size": 65536 00:07:41.104 }, 00:07:41.104 { 00:07:41.104 "name": "BaseBdev2", 00:07:41.104 "uuid": "517a2a14-3e37-42ac-9f05-199bd39477f8", 00:07:41.104 "is_configured": true, 00:07:41.104 "data_offset": 0, 00:07:41.104 "data_size": 65536 00:07:41.104 }, 00:07:41.104 { 00:07:41.104 "name": "BaseBdev3", 00:07:41.104 "uuid": "40d80074-3019-487d-9cf4-fcaf3ca0b511", 00:07:41.104 "is_configured": true, 
00:07:41.104 "data_offset": 0, 00:07:41.104 "data_size": 65536 00:07:41.104 } 00:07:41.104 ] 00:07:41.104 } 00:07:41.104 } 00:07:41.104 }' 00:07:41.104 04:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:41.104 04:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:41.104 BaseBdev2 00:07:41.104 BaseBdev3' 00:07:41.104 04:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:41.104 04:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:41.104 04:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:41.104 04:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:41.105 04:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:41.105 04:56:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:41.105 04:56:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.105 04:56:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.105 04:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:41.105 04:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:41.105 04:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:41.105 04:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:41.105 04:56:51 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:07:41.105 04:56:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.105 04:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:41.105 04:56:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.105 04:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:41.105 04:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:41.105 04:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:41.105 04:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:07:41.105 04:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:41.105 04:56:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:41.105 04:56:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.105 04:56:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.105 04:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:41.105 04:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:41.105 04:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:41.105 04:56:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:41.105 04:56:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.365 [2024-12-14 04:56:51.986053] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:41.365 [2024-12-14 04:56:51.986130] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:41.365 [2024-12-14 04:56:51.986228] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:41.365 04:56:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.365 04:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:41.365 04:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:07:41.365 04:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:41.365 04:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:41.365 04:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:07:41.365 04:56:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:07:41.365 04:56:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:41.365 04:56:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:41.365 04:56:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:41.365 04:56:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:41.365 04:56:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:41.365 04:56:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:41.365 04:56:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:41.365 04:56:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:07:41.365 04:56:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:41.365 04:56:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:41.365 04:56:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:41.365 04:56:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:41.365 04:56:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.365 04:56:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.365 04:56:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:41.365 "name": "Existed_Raid", 00:07:41.365 "uuid": "d2148cf5-878c-4d95-9593-94af187194e1", 00:07:41.365 "strip_size_kb": 64, 00:07:41.365 "state": "offline", 00:07:41.365 "raid_level": "raid0", 00:07:41.365 "superblock": false, 00:07:41.365 "num_base_bdevs": 3, 00:07:41.365 "num_base_bdevs_discovered": 2, 00:07:41.365 "num_base_bdevs_operational": 2, 00:07:41.365 "base_bdevs_list": [ 00:07:41.365 { 00:07:41.365 "name": null, 00:07:41.365 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:41.365 "is_configured": false, 00:07:41.365 "data_offset": 0, 00:07:41.365 "data_size": 65536 00:07:41.365 }, 00:07:41.365 { 00:07:41.365 "name": "BaseBdev2", 00:07:41.365 "uuid": "517a2a14-3e37-42ac-9f05-199bd39477f8", 00:07:41.365 "is_configured": true, 00:07:41.365 "data_offset": 0, 00:07:41.365 "data_size": 65536 00:07:41.365 }, 00:07:41.365 { 00:07:41.365 "name": "BaseBdev3", 00:07:41.365 "uuid": "40d80074-3019-487d-9cf4-fcaf3ca0b511", 00:07:41.365 "is_configured": true, 00:07:41.365 "data_offset": 0, 00:07:41.365 "data_size": 65536 00:07:41.365 } 00:07:41.365 ] 00:07:41.365 }' 00:07:41.365 04:56:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:41.365 04:56:52 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.624 04:56:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:41.624 04:56:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:41.624 04:56:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:41.624 04:56:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:41.624 04:56:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.624 04:56:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:41.624 04:56:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.624 04:56:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:41.624 04:56:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:41.624 04:56:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:41.624 04:56:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:41.624 04:56:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.624 [2024-12-14 04:56:52.477192] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:41.624 04:56:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.624 04:56:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:41.624 04:56:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:41.624 04:56:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:41.624 04:56:52 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:41.624 04:56:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:41.624 04:56:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.883 04:56:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.883 04:56:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:41.883 04:56:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:41.883 04:56:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:07:41.883 04:56:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:41.883 04:56:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.883 [2024-12-14 04:56:52.544349] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:07:41.883 [2024-12-14 04:56:52.544411] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:07:41.883 04:56:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.883 04:56:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:41.883 04:56:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:41.883 04:56:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:41.883 04:56:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:41.883 04:56:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:41.883 04:56:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # 
set +x 00:07:41.883 04:56:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.883 04:56:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:41.883 04:56:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:41.883 04:56:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:07:41.883 04:56:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:07:41.883 04:56:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:07:41.883 04:56:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:41.883 04:56:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:41.883 04:56:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.883 BaseBdev2 00:07:41.883 04:56:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.883 04:56:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:07:41.883 04:56:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:07:41.883 04:56:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:41.883 04:56:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:07:41.883 04:56:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:41.883 04:56:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:41.883 04:56:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:41.883 04:56:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:07:41.883 04:56:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.883 04:56:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.883 04:56:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:41.883 04:56:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:41.883 04:56:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.883 [ 00:07:41.883 { 00:07:41.883 "name": "BaseBdev2", 00:07:41.883 "aliases": [ 00:07:41.883 "0e7b4682-b220-4690-a635-0428d43f72ba" 00:07:41.883 ], 00:07:41.883 "product_name": "Malloc disk", 00:07:41.883 "block_size": 512, 00:07:41.883 "num_blocks": 65536, 00:07:41.883 "uuid": "0e7b4682-b220-4690-a635-0428d43f72ba", 00:07:41.883 "assigned_rate_limits": { 00:07:41.883 "rw_ios_per_sec": 0, 00:07:41.883 "rw_mbytes_per_sec": 0, 00:07:41.883 "r_mbytes_per_sec": 0, 00:07:41.883 "w_mbytes_per_sec": 0 00:07:41.883 }, 00:07:41.883 "claimed": false, 00:07:41.883 "zoned": false, 00:07:41.883 "supported_io_types": { 00:07:41.883 "read": true, 00:07:41.883 "write": true, 00:07:41.883 "unmap": true, 00:07:41.884 "flush": true, 00:07:41.884 "reset": true, 00:07:41.884 "nvme_admin": false, 00:07:41.884 "nvme_io": false, 00:07:41.884 "nvme_io_md": false, 00:07:41.884 "write_zeroes": true, 00:07:41.884 "zcopy": true, 00:07:41.884 "get_zone_info": false, 00:07:41.884 "zone_management": false, 00:07:41.884 "zone_append": false, 00:07:41.884 "compare": false, 00:07:41.884 "compare_and_write": false, 00:07:41.884 "abort": true, 00:07:41.884 "seek_hole": false, 00:07:41.884 "seek_data": false, 00:07:41.884 "copy": true, 00:07:41.884 "nvme_iov_md": false 00:07:41.884 }, 00:07:41.884 "memory_domains": [ 00:07:41.884 { 00:07:41.884 "dma_device_id": "system", 00:07:41.884 "dma_device_type": 1 00:07:41.884 }, 
00:07:41.884 { 00:07:41.884 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:41.884 "dma_device_type": 2 00:07:41.884 } 00:07:41.884 ], 00:07:41.884 "driver_specific": {} 00:07:41.884 } 00:07:41.884 ] 00:07:41.884 04:56:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.884 04:56:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:07:41.884 04:56:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:07:41.884 04:56:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:07:41.884 04:56:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:07:41.884 04:56:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:41.884 04:56:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.884 BaseBdev3 00:07:41.884 04:56:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.884 04:56:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:07:41.884 04:56:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:07:41.884 04:56:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:41.884 04:56:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:07:41.884 04:56:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:41.884 04:56:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:41.884 04:56:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:41.884 04:56:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:07:41.884 04:56:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.884 04:56:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.884 04:56:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:07:41.884 04:56:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:41.884 04:56:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.884 [ 00:07:41.884 { 00:07:41.884 "name": "BaseBdev3", 00:07:41.884 "aliases": [ 00:07:41.884 "8fca5941-2401-4ee8-a6a4-67f1797e4f29" 00:07:41.884 ], 00:07:41.884 "product_name": "Malloc disk", 00:07:41.884 "block_size": 512, 00:07:41.884 "num_blocks": 65536, 00:07:41.884 "uuid": "8fca5941-2401-4ee8-a6a4-67f1797e4f29", 00:07:41.884 "assigned_rate_limits": { 00:07:41.884 "rw_ios_per_sec": 0, 00:07:41.884 "rw_mbytes_per_sec": 0, 00:07:41.884 "r_mbytes_per_sec": 0, 00:07:41.884 "w_mbytes_per_sec": 0 00:07:41.884 }, 00:07:41.884 "claimed": false, 00:07:41.884 "zoned": false, 00:07:41.884 "supported_io_types": { 00:07:41.884 "read": true, 00:07:41.884 "write": true, 00:07:41.884 "unmap": true, 00:07:41.884 "flush": true, 00:07:41.884 "reset": true, 00:07:41.884 "nvme_admin": false, 00:07:41.884 "nvme_io": false, 00:07:41.884 "nvme_io_md": false, 00:07:41.884 "write_zeroes": true, 00:07:41.884 "zcopy": true, 00:07:41.884 "get_zone_info": false, 00:07:41.884 "zone_management": false, 00:07:41.884 "zone_append": false, 00:07:41.884 "compare": false, 00:07:41.884 "compare_and_write": false, 00:07:41.884 "abort": true, 00:07:41.884 "seek_hole": false, 00:07:41.884 "seek_data": false, 00:07:41.884 "copy": true, 00:07:41.884 "nvme_iov_md": false 00:07:41.884 }, 00:07:41.884 "memory_domains": [ 00:07:41.884 { 00:07:41.884 "dma_device_id": "system", 00:07:41.884 "dma_device_type": 1 00:07:41.884 }, 00:07:41.884 { 
00:07:41.884 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:41.884 "dma_device_type": 2 00:07:41.884 } 00:07:41.884 ], 00:07:41.884 "driver_specific": {} 00:07:41.884 } 00:07:41.884 ] 00:07:41.884 04:56:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.884 04:56:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:07:41.884 04:56:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:07:41.884 04:56:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:07:41.884 04:56:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:07:41.884 04:56:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:41.884 04:56:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.884 [2024-12-14 04:56:52.719135] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:41.884 [2024-12-14 04:56:52.719265] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:41.884 [2024-12-14 04:56:52.719306] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:41.884 [2024-12-14 04:56:52.721062] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:07:41.884 04:56:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.884 04:56:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:41.884 04:56:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:41.884 04:56:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:07:41.884 04:56:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:41.884 04:56:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:41.884 04:56:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:41.884 04:56:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:41.884 04:56:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:41.884 04:56:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:41.884 04:56:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:41.884 04:56:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:41.884 04:56:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:41.884 04:56:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:41.884 04:56:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.884 04:56:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.884 04:56:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:41.884 "name": "Existed_Raid", 00:07:41.884 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:41.884 "strip_size_kb": 64, 00:07:41.884 "state": "configuring", 00:07:41.884 "raid_level": "raid0", 00:07:41.884 "superblock": false, 00:07:41.884 "num_base_bdevs": 3, 00:07:41.884 "num_base_bdevs_discovered": 2, 00:07:41.884 "num_base_bdevs_operational": 3, 00:07:41.884 "base_bdevs_list": [ 00:07:41.884 { 00:07:41.884 "name": "BaseBdev1", 00:07:41.884 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:41.884 
"is_configured": false, 00:07:41.884 "data_offset": 0, 00:07:41.884 "data_size": 0 00:07:41.884 }, 00:07:41.884 { 00:07:41.884 "name": "BaseBdev2", 00:07:41.884 "uuid": "0e7b4682-b220-4690-a635-0428d43f72ba", 00:07:41.884 "is_configured": true, 00:07:41.884 "data_offset": 0, 00:07:41.884 "data_size": 65536 00:07:41.884 }, 00:07:41.884 { 00:07:41.884 "name": "BaseBdev3", 00:07:41.884 "uuid": "8fca5941-2401-4ee8-a6a4-67f1797e4f29", 00:07:41.884 "is_configured": true, 00:07:41.884 "data_offset": 0, 00:07:41.884 "data_size": 65536 00:07:41.884 } 00:07:41.884 ] 00:07:41.884 }' 00:07:41.884 04:56:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:41.884 04:56:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.452 04:56:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:07:42.453 04:56:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.453 04:56:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.453 [2024-12-14 04:56:53.126419] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:42.453 04:56:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:42.453 04:56:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:42.453 04:56:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:42.453 04:56:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:42.453 04:56:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:42.453 04:56:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:42.453 04:56:53 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:42.453 04:56:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:42.453 04:56:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:42.453 04:56:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:42.453 04:56:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:42.453 04:56:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:42.453 04:56:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.453 04:56:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.453 04:56:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:42.453 04:56:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:42.453 04:56:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:42.453 "name": "Existed_Raid", 00:07:42.453 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:42.453 "strip_size_kb": 64, 00:07:42.453 "state": "configuring", 00:07:42.453 "raid_level": "raid0", 00:07:42.453 "superblock": false, 00:07:42.453 "num_base_bdevs": 3, 00:07:42.453 "num_base_bdevs_discovered": 1, 00:07:42.453 "num_base_bdevs_operational": 3, 00:07:42.453 "base_bdevs_list": [ 00:07:42.453 { 00:07:42.453 "name": "BaseBdev1", 00:07:42.453 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:42.453 "is_configured": false, 00:07:42.453 "data_offset": 0, 00:07:42.453 "data_size": 0 00:07:42.453 }, 00:07:42.453 { 00:07:42.453 "name": null, 00:07:42.453 "uuid": "0e7b4682-b220-4690-a635-0428d43f72ba", 00:07:42.453 "is_configured": false, 00:07:42.453 "data_offset": 0, 
00:07:42.453 "data_size": 65536 00:07:42.453 }, 00:07:42.453 { 00:07:42.453 "name": "BaseBdev3", 00:07:42.453 "uuid": "8fca5941-2401-4ee8-a6a4-67f1797e4f29", 00:07:42.453 "is_configured": true, 00:07:42.453 "data_offset": 0, 00:07:42.453 "data_size": 65536 00:07:42.453 } 00:07:42.453 ] 00:07:42.453 }' 00:07:42.453 04:56:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:42.453 04:56:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.712 04:56:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:42.712 04:56:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.712 04:56:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.712 04:56:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:07:42.712 04:56:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:42.972 04:56:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:07:42.972 04:56:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:42.972 04:56:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.972 04:56:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.972 [2024-12-14 04:56:53.608635] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:42.972 BaseBdev1 00:07:42.973 04:56:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:42.973 04:56:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:07:42.973 04:56:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local 
bdev_name=BaseBdev1 00:07:42.973 04:56:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:42.973 04:56:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:07:42.973 04:56:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:42.973 04:56:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:42.973 04:56:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:42.973 04:56:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.973 04:56:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.973 04:56:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:42.973 04:56:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:42.973 04:56:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.973 04:56:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.973 [ 00:07:42.973 { 00:07:42.973 "name": "BaseBdev1", 00:07:42.973 "aliases": [ 00:07:42.973 "ec12296d-8a97-480b-90db-413e51e94d91" 00:07:42.973 ], 00:07:42.973 "product_name": "Malloc disk", 00:07:42.973 "block_size": 512, 00:07:42.973 "num_blocks": 65536, 00:07:42.973 "uuid": "ec12296d-8a97-480b-90db-413e51e94d91", 00:07:42.973 "assigned_rate_limits": { 00:07:42.973 "rw_ios_per_sec": 0, 00:07:42.973 "rw_mbytes_per_sec": 0, 00:07:42.973 "r_mbytes_per_sec": 0, 00:07:42.973 "w_mbytes_per_sec": 0 00:07:42.973 }, 00:07:42.973 "claimed": true, 00:07:42.973 "claim_type": "exclusive_write", 00:07:42.973 "zoned": false, 00:07:42.973 "supported_io_types": { 00:07:42.973 "read": true, 00:07:42.973 "write": true, 00:07:42.973 "unmap": 
true, 00:07:42.973 "flush": true, 00:07:42.973 "reset": true, 00:07:42.973 "nvme_admin": false, 00:07:42.973 "nvme_io": false, 00:07:42.973 "nvme_io_md": false, 00:07:42.973 "write_zeroes": true, 00:07:42.973 "zcopy": true, 00:07:42.973 "get_zone_info": false, 00:07:42.973 "zone_management": false, 00:07:42.973 "zone_append": false, 00:07:42.973 "compare": false, 00:07:42.973 "compare_and_write": false, 00:07:42.973 "abort": true, 00:07:42.973 "seek_hole": false, 00:07:42.973 "seek_data": false, 00:07:42.973 "copy": true, 00:07:42.973 "nvme_iov_md": false 00:07:42.973 }, 00:07:42.973 "memory_domains": [ 00:07:42.973 { 00:07:42.973 "dma_device_id": "system", 00:07:42.973 "dma_device_type": 1 00:07:42.973 }, 00:07:42.973 { 00:07:42.973 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:42.973 "dma_device_type": 2 00:07:42.973 } 00:07:42.973 ], 00:07:42.973 "driver_specific": {} 00:07:42.973 } 00:07:42.973 ] 00:07:42.973 04:56:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:42.973 04:56:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:07:42.973 04:56:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:42.973 04:56:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:42.973 04:56:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:42.973 04:56:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:42.973 04:56:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:42.973 04:56:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:42.973 04:56:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:42.973 04:56:53 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:42.973 04:56:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:42.973 04:56:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:42.973 04:56:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:42.973 04:56:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:42.973 04:56:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.973 04:56:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.973 04:56:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:42.973 04:56:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:42.973 "name": "Existed_Raid", 00:07:42.973 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:42.973 "strip_size_kb": 64, 00:07:42.973 "state": "configuring", 00:07:42.973 "raid_level": "raid0", 00:07:42.973 "superblock": false, 00:07:42.973 "num_base_bdevs": 3, 00:07:42.973 "num_base_bdevs_discovered": 2, 00:07:42.973 "num_base_bdevs_operational": 3, 00:07:42.973 "base_bdevs_list": [ 00:07:42.973 { 00:07:42.973 "name": "BaseBdev1", 00:07:42.973 "uuid": "ec12296d-8a97-480b-90db-413e51e94d91", 00:07:42.973 "is_configured": true, 00:07:42.973 "data_offset": 0, 00:07:42.973 "data_size": 65536 00:07:42.973 }, 00:07:42.973 { 00:07:42.973 "name": null, 00:07:42.973 "uuid": "0e7b4682-b220-4690-a635-0428d43f72ba", 00:07:42.973 "is_configured": false, 00:07:42.973 "data_offset": 0, 00:07:42.973 "data_size": 65536 00:07:42.973 }, 00:07:42.973 { 00:07:42.973 "name": "BaseBdev3", 00:07:42.973 "uuid": "8fca5941-2401-4ee8-a6a4-67f1797e4f29", 00:07:42.973 "is_configured": true, 00:07:42.973 "data_offset": 0, 
00:07:42.973 "data_size": 65536 00:07:42.973 } 00:07:42.973 ] 00:07:42.973 }' 00:07:42.973 04:56:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:42.973 04:56:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.233 04:56:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:43.233 04:56:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:07:43.233 04:56:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.233 04:56:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.233 04:56:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.233 04:56:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:07:43.233 04:56:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:07:43.233 04:56:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.233 04:56:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.233 [2024-12-14 04:56:54.055925] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:07:43.233 04:56:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.233 04:56:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:43.233 04:56:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:43.233 04:56:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:43.233 04:56:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:07:43.233 04:56:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:43.233 04:56:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:43.233 04:56:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:43.233 04:56:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:43.233 04:56:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:43.233 04:56:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:43.233 04:56:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:43.233 04:56:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:43.233 04:56:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.233 04:56:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.233 04:56:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.492 04:56:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:43.492 "name": "Existed_Raid", 00:07:43.492 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:43.492 "strip_size_kb": 64, 00:07:43.492 "state": "configuring", 00:07:43.492 "raid_level": "raid0", 00:07:43.492 "superblock": false, 00:07:43.492 "num_base_bdevs": 3, 00:07:43.492 "num_base_bdevs_discovered": 1, 00:07:43.492 "num_base_bdevs_operational": 3, 00:07:43.492 "base_bdevs_list": [ 00:07:43.492 { 00:07:43.492 "name": "BaseBdev1", 00:07:43.492 "uuid": "ec12296d-8a97-480b-90db-413e51e94d91", 00:07:43.492 "is_configured": true, 00:07:43.492 "data_offset": 0, 00:07:43.492 "data_size": 65536 00:07:43.492 }, 00:07:43.492 { 
00:07:43.492 "name": null, 00:07:43.492 "uuid": "0e7b4682-b220-4690-a635-0428d43f72ba", 00:07:43.492 "is_configured": false, 00:07:43.492 "data_offset": 0, 00:07:43.492 "data_size": 65536 00:07:43.492 }, 00:07:43.492 { 00:07:43.492 "name": null, 00:07:43.492 "uuid": "8fca5941-2401-4ee8-a6a4-67f1797e4f29", 00:07:43.492 "is_configured": false, 00:07:43.492 "data_offset": 0, 00:07:43.492 "data_size": 65536 00:07:43.492 } 00:07:43.492 ] 00:07:43.492 }' 00:07:43.492 04:56:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:43.492 04:56:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.752 04:56:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:43.752 04:56:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.752 04:56:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.752 04:56:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:07:43.752 04:56:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.752 04:56:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:07:43.752 04:56:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:07:43.752 04:56:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.752 04:56:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.752 [2024-12-14 04:56:54.523288] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:07:43.752 04:56:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.752 04:56:54 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:43.752 04:56:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:43.752 04:56:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:43.752 04:56:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:43.752 04:56:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:43.752 04:56:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:43.752 04:56:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:43.752 04:56:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:43.752 04:56:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:43.752 04:56:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:43.752 04:56:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:43.752 04:56:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:43.752 04:56:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.752 04:56:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.752 04:56:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.752 04:56:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:43.752 "name": "Existed_Raid", 00:07:43.752 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:43.752 "strip_size_kb": 64, 00:07:43.753 "state": "configuring", 00:07:43.753 "raid_level": "raid0", 00:07:43.753 
"superblock": false, 00:07:43.753 "num_base_bdevs": 3, 00:07:43.753 "num_base_bdevs_discovered": 2, 00:07:43.753 "num_base_bdevs_operational": 3, 00:07:43.753 "base_bdevs_list": [ 00:07:43.753 { 00:07:43.753 "name": "BaseBdev1", 00:07:43.753 "uuid": "ec12296d-8a97-480b-90db-413e51e94d91", 00:07:43.753 "is_configured": true, 00:07:43.753 "data_offset": 0, 00:07:43.753 "data_size": 65536 00:07:43.753 }, 00:07:43.753 { 00:07:43.753 "name": null, 00:07:43.753 "uuid": "0e7b4682-b220-4690-a635-0428d43f72ba", 00:07:43.753 "is_configured": false, 00:07:43.753 "data_offset": 0, 00:07:43.753 "data_size": 65536 00:07:43.753 }, 00:07:43.753 { 00:07:43.753 "name": "BaseBdev3", 00:07:43.753 "uuid": "8fca5941-2401-4ee8-a6a4-67f1797e4f29", 00:07:43.753 "is_configured": true, 00:07:43.753 "data_offset": 0, 00:07:43.753 "data_size": 65536 00:07:43.753 } 00:07:43.753 ] 00:07:43.753 }' 00:07:43.753 04:56:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:43.753 04:56:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.323 04:56:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:07:44.323 04:56:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:44.323 04:56:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:44.323 04:56:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.323 04:56:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:44.323 04:56:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:07:44.323 04:56:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:44.323 04:56:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:07:44.323 04:56:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.323 [2024-12-14 04:56:54.986699] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:44.323 04:56:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:44.323 04:56:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:44.323 04:56:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:44.323 04:56:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:44.323 04:56:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:44.323 04:56:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:44.323 04:56:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:44.323 04:56:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:44.323 04:56:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:44.323 04:56:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:44.323 04:56:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:44.323 04:56:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:44.323 04:56:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:44.323 04:56:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:44.323 04:56:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.323 04:56:55 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:44.323 04:56:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:44.323 "name": "Existed_Raid", 00:07:44.323 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:44.323 "strip_size_kb": 64, 00:07:44.323 "state": "configuring", 00:07:44.323 "raid_level": "raid0", 00:07:44.323 "superblock": false, 00:07:44.323 "num_base_bdevs": 3, 00:07:44.323 "num_base_bdevs_discovered": 1, 00:07:44.323 "num_base_bdevs_operational": 3, 00:07:44.323 "base_bdevs_list": [ 00:07:44.323 { 00:07:44.323 "name": null, 00:07:44.323 "uuid": "ec12296d-8a97-480b-90db-413e51e94d91", 00:07:44.323 "is_configured": false, 00:07:44.323 "data_offset": 0, 00:07:44.323 "data_size": 65536 00:07:44.323 }, 00:07:44.323 { 00:07:44.323 "name": null, 00:07:44.323 "uuid": "0e7b4682-b220-4690-a635-0428d43f72ba", 00:07:44.323 "is_configured": false, 00:07:44.323 "data_offset": 0, 00:07:44.323 "data_size": 65536 00:07:44.323 }, 00:07:44.323 { 00:07:44.323 "name": "BaseBdev3", 00:07:44.323 "uuid": "8fca5941-2401-4ee8-a6a4-67f1797e4f29", 00:07:44.323 "is_configured": true, 00:07:44.323 "data_offset": 0, 00:07:44.323 "data_size": 65536 00:07:44.323 } 00:07:44.323 ] 00:07:44.323 }' 00:07:44.323 04:56:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:44.323 04:56:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.583 04:56:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:07:44.583 04:56:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:44.583 04:56:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:44.583 04:56:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.583 04:56:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- 
# [[ 0 == 0 ]] 00:07:44.583 04:56:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:07:44.583 04:56:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:07:44.583 04:56:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:44.583 04:56:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.583 [2024-12-14 04:56:55.452433] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:44.583 04:56:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:44.583 04:56:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:44.583 04:56:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:44.583 04:56:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:44.583 04:56:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:44.583 04:56:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:44.583 04:56:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:44.583 04:56:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:44.583 04:56:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:44.583 04:56:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:44.583 04:56:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:44.583 04:56:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:07:44.583 04:56:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:44.583 04:56:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:44.842 04:56:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.842 04:56:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:44.842 04:56:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:44.842 "name": "Existed_Raid", 00:07:44.842 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:44.842 "strip_size_kb": 64, 00:07:44.842 "state": "configuring", 00:07:44.842 "raid_level": "raid0", 00:07:44.842 "superblock": false, 00:07:44.842 "num_base_bdevs": 3, 00:07:44.842 "num_base_bdevs_discovered": 2, 00:07:44.842 "num_base_bdevs_operational": 3, 00:07:44.842 "base_bdevs_list": [ 00:07:44.842 { 00:07:44.842 "name": null, 00:07:44.842 "uuid": "ec12296d-8a97-480b-90db-413e51e94d91", 00:07:44.842 "is_configured": false, 00:07:44.842 "data_offset": 0, 00:07:44.842 "data_size": 65536 00:07:44.842 }, 00:07:44.842 { 00:07:44.842 "name": "BaseBdev2", 00:07:44.842 "uuid": "0e7b4682-b220-4690-a635-0428d43f72ba", 00:07:44.842 "is_configured": true, 00:07:44.842 "data_offset": 0, 00:07:44.842 "data_size": 65536 00:07:44.842 }, 00:07:44.842 { 00:07:44.842 "name": "BaseBdev3", 00:07:44.842 "uuid": "8fca5941-2401-4ee8-a6a4-67f1797e4f29", 00:07:44.842 "is_configured": true, 00:07:44.842 "data_offset": 0, 00:07:44.842 "data_size": 65536 00:07:44.842 } 00:07:44.842 ] 00:07:44.842 }' 00:07:44.842 04:56:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:44.842 04:56:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.103 04:56:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:45.103 04:56:55 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:07:45.103 04:56:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:45.103 04:56:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.103 04:56:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:45.103 04:56:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:07:45.103 04:56:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:45.103 04:56:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:07:45.103 04:56:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:45.103 04:56:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.103 04:56:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:45.103 04:56:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u ec12296d-8a97-480b-90db-413e51e94d91 00:07:45.103 04:56:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:45.103 04:56:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.103 [2024-12-14 04:56:55.890686] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:07:45.103 [2024-12-14 04:56:55.890789] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:07:45.103 [2024-12-14 04:56:55.890818] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:07:45.103 [2024-12-14 04:56:55.891088] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 
00:07:45.103 [2024-12-14 04:56:55.891286] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:07:45.103 [2024-12-14 04:56:55.891331] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:07:45.103 [2024-12-14 04:56:55.891562] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:45.103 NewBaseBdev 00:07:45.103 04:56:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:45.103 04:56:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:07:45.103 04:56:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:07:45.103 04:56:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:45.103 04:56:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:07:45.103 04:56:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:45.103 04:56:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:45.103 04:56:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:45.103 04:56:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:45.103 04:56:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.103 04:56:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:45.103 04:56:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:07:45.103 04:56:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:45.103 04:56:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:07:45.103 [ 00:07:45.103 { 00:07:45.103 "name": "NewBaseBdev", 00:07:45.103 "aliases": [ 00:07:45.103 "ec12296d-8a97-480b-90db-413e51e94d91" 00:07:45.103 ], 00:07:45.103 "product_name": "Malloc disk", 00:07:45.103 "block_size": 512, 00:07:45.103 "num_blocks": 65536, 00:07:45.103 "uuid": "ec12296d-8a97-480b-90db-413e51e94d91", 00:07:45.103 "assigned_rate_limits": { 00:07:45.103 "rw_ios_per_sec": 0, 00:07:45.103 "rw_mbytes_per_sec": 0, 00:07:45.103 "r_mbytes_per_sec": 0, 00:07:45.103 "w_mbytes_per_sec": 0 00:07:45.103 }, 00:07:45.103 "claimed": true, 00:07:45.103 "claim_type": "exclusive_write", 00:07:45.103 "zoned": false, 00:07:45.103 "supported_io_types": { 00:07:45.103 "read": true, 00:07:45.103 "write": true, 00:07:45.103 "unmap": true, 00:07:45.103 "flush": true, 00:07:45.103 "reset": true, 00:07:45.103 "nvme_admin": false, 00:07:45.103 "nvme_io": false, 00:07:45.103 "nvme_io_md": false, 00:07:45.103 "write_zeroes": true, 00:07:45.103 "zcopy": true, 00:07:45.103 "get_zone_info": false, 00:07:45.103 "zone_management": false, 00:07:45.103 "zone_append": false, 00:07:45.103 "compare": false, 00:07:45.103 "compare_and_write": false, 00:07:45.103 "abort": true, 00:07:45.103 "seek_hole": false, 00:07:45.103 "seek_data": false, 00:07:45.103 "copy": true, 00:07:45.103 "nvme_iov_md": false 00:07:45.103 }, 00:07:45.103 "memory_domains": [ 00:07:45.103 { 00:07:45.103 "dma_device_id": "system", 00:07:45.103 "dma_device_type": 1 00:07:45.103 }, 00:07:45.103 { 00:07:45.103 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:45.103 "dma_device_type": 2 00:07:45.103 } 00:07:45.103 ], 00:07:45.103 "driver_specific": {} 00:07:45.103 } 00:07:45.103 ] 00:07:45.103 04:56:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:45.103 04:56:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:07:45.103 04:56:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state 
Existed_Raid online raid0 64 3 00:07:45.103 04:56:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:45.103 04:56:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:45.103 04:56:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:45.103 04:56:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:45.103 04:56:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:45.103 04:56:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:45.103 04:56:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:45.103 04:56:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:45.103 04:56:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:45.103 04:56:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:45.103 04:56:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:45.103 04:56:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:45.103 04:56:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.103 04:56:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:45.104 04:56:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:45.104 "name": "Existed_Raid", 00:07:45.104 "uuid": "75b921a7-0713-4a35-ac20-2e978ac6cfd8", 00:07:45.104 "strip_size_kb": 64, 00:07:45.104 "state": "online", 00:07:45.104 "raid_level": "raid0", 00:07:45.104 "superblock": false, 00:07:45.104 "num_base_bdevs": 3, 00:07:45.104 
"num_base_bdevs_discovered": 3, 00:07:45.104 "num_base_bdevs_operational": 3, 00:07:45.104 "base_bdevs_list": [ 00:07:45.104 { 00:07:45.104 "name": "NewBaseBdev", 00:07:45.104 "uuid": "ec12296d-8a97-480b-90db-413e51e94d91", 00:07:45.104 "is_configured": true, 00:07:45.104 "data_offset": 0, 00:07:45.104 "data_size": 65536 00:07:45.104 }, 00:07:45.104 { 00:07:45.104 "name": "BaseBdev2", 00:07:45.104 "uuid": "0e7b4682-b220-4690-a635-0428d43f72ba", 00:07:45.104 "is_configured": true, 00:07:45.104 "data_offset": 0, 00:07:45.104 "data_size": 65536 00:07:45.104 }, 00:07:45.104 { 00:07:45.104 "name": "BaseBdev3", 00:07:45.104 "uuid": "8fca5941-2401-4ee8-a6a4-67f1797e4f29", 00:07:45.104 "is_configured": true, 00:07:45.104 "data_offset": 0, 00:07:45.104 "data_size": 65536 00:07:45.104 } 00:07:45.104 ] 00:07:45.104 }' 00:07:45.104 04:56:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:45.104 04:56:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.680 04:56:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:07:45.680 04:56:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:45.680 04:56:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:45.680 04:56:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:45.680 04:56:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:45.680 04:56:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:45.680 04:56:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:45.680 04:56:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:45.680 04:56:56 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:07:45.680 04:56:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:45.680 [2024-12-14 04:56:56.330329] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:45.680 04:56:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:45.680 04:56:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:45.680 "name": "Existed_Raid", 00:07:45.680 "aliases": [ 00:07:45.680 "75b921a7-0713-4a35-ac20-2e978ac6cfd8" 00:07:45.680 ], 00:07:45.680 "product_name": "Raid Volume", 00:07:45.680 "block_size": 512, 00:07:45.680 "num_blocks": 196608, 00:07:45.680 "uuid": "75b921a7-0713-4a35-ac20-2e978ac6cfd8", 00:07:45.681 "assigned_rate_limits": { 00:07:45.681 "rw_ios_per_sec": 0, 00:07:45.681 "rw_mbytes_per_sec": 0, 00:07:45.681 "r_mbytes_per_sec": 0, 00:07:45.681 "w_mbytes_per_sec": 0 00:07:45.681 }, 00:07:45.681 "claimed": false, 00:07:45.681 "zoned": false, 00:07:45.681 "supported_io_types": { 00:07:45.681 "read": true, 00:07:45.681 "write": true, 00:07:45.681 "unmap": true, 00:07:45.681 "flush": true, 00:07:45.681 "reset": true, 00:07:45.681 "nvme_admin": false, 00:07:45.681 "nvme_io": false, 00:07:45.681 "nvme_io_md": false, 00:07:45.681 "write_zeroes": true, 00:07:45.681 "zcopy": false, 00:07:45.681 "get_zone_info": false, 00:07:45.681 "zone_management": false, 00:07:45.681 "zone_append": false, 00:07:45.681 "compare": false, 00:07:45.681 "compare_and_write": false, 00:07:45.681 "abort": false, 00:07:45.681 "seek_hole": false, 00:07:45.681 "seek_data": false, 00:07:45.681 "copy": false, 00:07:45.681 "nvme_iov_md": false 00:07:45.681 }, 00:07:45.681 "memory_domains": [ 00:07:45.681 { 00:07:45.681 "dma_device_id": "system", 00:07:45.681 "dma_device_type": 1 00:07:45.681 }, 00:07:45.681 { 00:07:45.681 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:45.681 "dma_device_type": 2 00:07:45.681 }, 00:07:45.681 
{ 00:07:45.681 "dma_device_id": "system", 00:07:45.681 "dma_device_type": 1 00:07:45.681 }, 00:07:45.681 { 00:07:45.681 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:45.681 "dma_device_type": 2 00:07:45.681 }, 00:07:45.681 { 00:07:45.681 "dma_device_id": "system", 00:07:45.681 "dma_device_type": 1 00:07:45.681 }, 00:07:45.681 { 00:07:45.681 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:45.681 "dma_device_type": 2 00:07:45.681 } 00:07:45.681 ], 00:07:45.681 "driver_specific": { 00:07:45.681 "raid": { 00:07:45.681 "uuid": "75b921a7-0713-4a35-ac20-2e978ac6cfd8", 00:07:45.681 "strip_size_kb": 64, 00:07:45.681 "state": "online", 00:07:45.681 "raid_level": "raid0", 00:07:45.681 "superblock": false, 00:07:45.681 "num_base_bdevs": 3, 00:07:45.681 "num_base_bdevs_discovered": 3, 00:07:45.681 "num_base_bdevs_operational": 3, 00:07:45.681 "base_bdevs_list": [ 00:07:45.681 { 00:07:45.681 "name": "NewBaseBdev", 00:07:45.681 "uuid": "ec12296d-8a97-480b-90db-413e51e94d91", 00:07:45.681 "is_configured": true, 00:07:45.681 "data_offset": 0, 00:07:45.681 "data_size": 65536 00:07:45.681 }, 00:07:45.681 { 00:07:45.681 "name": "BaseBdev2", 00:07:45.681 "uuid": "0e7b4682-b220-4690-a635-0428d43f72ba", 00:07:45.681 "is_configured": true, 00:07:45.681 "data_offset": 0, 00:07:45.681 "data_size": 65536 00:07:45.681 }, 00:07:45.681 { 00:07:45.681 "name": "BaseBdev3", 00:07:45.681 "uuid": "8fca5941-2401-4ee8-a6a4-67f1797e4f29", 00:07:45.681 "is_configured": true, 00:07:45.681 "data_offset": 0, 00:07:45.681 "data_size": 65536 00:07:45.681 } 00:07:45.681 ] 00:07:45.681 } 00:07:45.681 } 00:07:45.681 }' 00:07:45.681 04:56:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:45.681 04:56:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:07:45.681 BaseBdev2 00:07:45.681 BaseBdev3' 00:07:45.681 04:56:56 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:45.681 04:56:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:45.681 04:56:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:45.681 04:56:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:45.681 04:56:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:07:45.681 04:56:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:45.681 04:56:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.681 04:56:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:45.681 04:56:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:45.681 04:56:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:45.681 04:56:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:45.681 04:56:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:45.681 04:56:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:45.681 04:56:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:45.681 04:56:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.681 04:56:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:45.950 04:56:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:45.950 
04:56:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:45.950 04:56:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:45.950 04:56:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:45.950 04:56:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:07:45.950 04:56:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:45.950 04:56:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.950 04:56:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:45.950 04:56:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:45.950 04:56:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:45.950 04:56:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:45.950 04:56:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:45.950 04:56:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.950 [2024-12-14 04:56:56.601547] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:45.950 [2024-12-14 04:56:56.601573] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:45.950 [2024-12-14 04:56:56.601640] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:45.950 [2024-12-14 04:56:56.601690] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:45.950 [2024-12-14 04:56:56.601710] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline 00:07:45.950 04:56:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:45.950 04:56:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 75064 00:07:45.950 04:56:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 75064 ']' 00:07:45.950 04:56:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 75064 00:07:45.950 04:56:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:07:45.950 04:56:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:45.950 04:56:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75064 00:07:45.950 killing process with pid 75064 00:07:45.950 04:56:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:45.950 04:56:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:45.950 04:56:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75064' 00:07:45.950 04:56:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 75064 00:07:45.950 04:56:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 75064 00:07:45.950 [2024-12-14 04:56:56.633649] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:45.950 [2024-12-14 04:56:56.664832] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:46.210 04:56:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:07:46.210 00:07:46.210 real 0m8.424s 00:07:46.210 user 0m14.445s 00:07:46.210 sys 0m1.598s 00:07:46.211 04:56:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:46.211 
************************************ 00:07:46.211 END TEST raid_state_function_test 00:07:46.211 ************************************ 00:07:46.211 04:56:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.211 04:56:56 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:07:46.211 04:56:56 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:46.211 04:56:56 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:46.211 04:56:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:46.211 ************************************ 00:07:46.211 START TEST raid_state_function_test_sb 00:07:46.211 ************************************ 00:07:46.211 04:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 3 true 00:07:46.211 04:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:07:46.211 04:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:07:46.211 04:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:07:46.211 04:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:46.211 04:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:46.211 04:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:46.211 04:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:46.211 04:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:46.211 04:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:46.211 04:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 
00:07:46.211 04:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:46.211 04:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:46.211 04:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:07:46.211 04:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:46.211 04:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:46.211 04:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:07:46.211 04:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:46.211 04:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:46.211 04:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:46.211 04:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:46.211 04:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:46.211 04:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:07:46.211 04:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:46.211 04:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:46.211 04:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:07:46.211 04:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:07:46.211 04:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=75663 00:07:46.211 04:56:57 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:46.211 04:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 75663' 00:07:46.211 Process raid pid: 75663 00:07:46.211 04:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 75663 00:07:46.211 04:56:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 75663 ']' 00:07:46.211 04:56:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:46.211 04:56:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:46.211 04:56:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:46.211 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:46.211 04:56:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:46.211 04:56:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:46.211 [2024-12-14 04:56:57.079212] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:07:46.211 [2024-12-14 04:56:57.079412] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:46.471 [2024-12-14 04:56:57.241659] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:46.471 [2024-12-14 04:56:57.287557] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.471 [2024-12-14 04:56:57.329511] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:46.471 [2024-12-14 04:56:57.329543] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:47.040 04:56:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:47.040 04:56:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:07:47.040 04:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:07:47.040 04:56:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.040 04:56:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:47.040 [2024-12-14 04:56:57.907380] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:47.040 [2024-12-14 04:56:57.907495] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:47.040 [2024-12-14 04:56:57.907514] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:47.040 [2024-12-14 04:56:57.907524] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:47.040 [2024-12-14 04:56:57.907530] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with 
name: BaseBdev3 00:07:47.040 [2024-12-14 04:56:57.907542] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:07:47.040 04:56:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.040 04:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:47.040 04:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:47.040 04:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:47.040 04:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:47.040 04:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:47.040 04:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:47.040 04:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:47.040 04:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:47.040 04:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:47.040 04:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:47.040 04:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:47.040 04:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:47.040 04:56:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.040 04:56:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:47.299 04:56:57 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.299 04:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:47.299 "name": "Existed_Raid", 00:07:47.299 "uuid": "70664ff5-6037-4e33-8e75-a45d851352c3", 00:07:47.299 "strip_size_kb": 64, 00:07:47.299 "state": "configuring", 00:07:47.299 "raid_level": "raid0", 00:07:47.299 "superblock": true, 00:07:47.299 "num_base_bdevs": 3, 00:07:47.299 "num_base_bdevs_discovered": 0, 00:07:47.299 "num_base_bdevs_operational": 3, 00:07:47.299 "base_bdevs_list": [ 00:07:47.299 { 00:07:47.299 "name": "BaseBdev1", 00:07:47.299 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:47.299 "is_configured": false, 00:07:47.299 "data_offset": 0, 00:07:47.299 "data_size": 0 00:07:47.299 }, 00:07:47.299 { 00:07:47.299 "name": "BaseBdev2", 00:07:47.299 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:47.299 "is_configured": false, 00:07:47.299 "data_offset": 0, 00:07:47.299 "data_size": 0 00:07:47.299 }, 00:07:47.299 { 00:07:47.299 "name": "BaseBdev3", 00:07:47.299 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:47.299 "is_configured": false, 00:07:47.299 "data_offset": 0, 00:07:47.299 "data_size": 0 00:07:47.299 } 00:07:47.299 ] 00:07:47.299 }' 00:07:47.299 04:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:47.299 04:56:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:47.559 04:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:47.559 04:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.559 04:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:47.559 [2024-12-14 04:56:58.343298] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:47.559 [2024-12-14 04:56:58.343388] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:07:47.559 04:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.559 04:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:07:47.559 04:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.559 04:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:47.559 [2024-12-14 04:56:58.355314] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:47.559 [2024-12-14 04:56:58.355405] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:47.559 [2024-12-14 04:56:58.355433] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:47.559 [2024-12-14 04:56:58.355457] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:47.559 [2024-12-14 04:56:58.355475] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:07:47.559 [2024-12-14 04:56:58.355496] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:07:47.559 04:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.559 04:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:47.559 04:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.559 04:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:47.559 [2024-12-14 04:56:58.376166] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:47.559 BaseBdev1 
00:07:47.559 04:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.559 04:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:47.559 04:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:07:47.559 04:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:47.559 04:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:07:47.559 04:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:47.559 04:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:47.559 04:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:47.559 04:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.559 04:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:47.559 04:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.559 04:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:47.559 04:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.559 04:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:47.559 [ 00:07:47.559 { 00:07:47.559 "name": "BaseBdev1", 00:07:47.559 "aliases": [ 00:07:47.559 "d4921d6a-c03a-45fe-92f3-95fec7db880e" 00:07:47.559 ], 00:07:47.559 "product_name": "Malloc disk", 00:07:47.559 "block_size": 512, 00:07:47.559 "num_blocks": 65536, 00:07:47.559 "uuid": "d4921d6a-c03a-45fe-92f3-95fec7db880e", 00:07:47.559 "assigned_rate_limits": { 00:07:47.559 
"rw_ios_per_sec": 0, 00:07:47.559 "rw_mbytes_per_sec": 0, 00:07:47.559 "r_mbytes_per_sec": 0, 00:07:47.559 "w_mbytes_per_sec": 0 00:07:47.559 }, 00:07:47.559 "claimed": true, 00:07:47.559 "claim_type": "exclusive_write", 00:07:47.559 "zoned": false, 00:07:47.559 "supported_io_types": { 00:07:47.559 "read": true, 00:07:47.559 "write": true, 00:07:47.559 "unmap": true, 00:07:47.559 "flush": true, 00:07:47.559 "reset": true, 00:07:47.559 "nvme_admin": false, 00:07:47.559 "nvme_io": false, 00:07:47.559 "nvme_io_md": false, 00:07:47.559 "write_zeroes": true, 00:07:47.559 "zcopy": true, 00:07:47.559 "get_zone_info": false, 00:07:47.559 "zone_management": false, 00:07:47.559 "zone_append": false, 00:07:47.559 "compare": false, 00:07:47.559 "compare_and_write": false, 00:07:47.559 "abort": true, 00:07:47.559 "seek_hole": false, 00:07:47.559 "seek_data": false, 00:07:47.559 "copy": true, 00:07:47.559 "nvme_iov_md": false 00:07:47.559 }, 00:07:47.559 "memory_domains": [ 00:07:47.559 { 00:07:47.559 "dma_device_id": "system", 00:07:47.559 "dma_device_type": 1 00:07:47.559 }, 00:07:47.559 { 00:07:47.559 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:47.559 "dma_device_type": 2 00:07:47.559 } 00:07:47.559 ], 00:07:47.559 "driver_specific": {} 00:07:47.559 } 00:07:47.559 ] 00:07:47.559 04:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.559 04:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:07:47.559 04:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:47.559 04:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:47.559 04:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:47.559 04:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:07:47.559 04:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:47.559 04:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:47.559 04:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:47.559 04:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:47.559 04:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:47.559 04:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:47.559 04:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:47.559 04:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:47.559 04:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.559 04:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:47.818 04:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.819 04:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:47.819 "name": "Existed_Raid", 00:07:47.819 "uuid": "b1fa3acb-d740-4719-8125-5edb4acd38b2", 00:07:47.819 "strip_size_kb": 64, 00:07:47.819 "state": "configuring", 00:07:47.819 "raid_level": "raid0", 00:07:47.819 "superblock": true, 00:07:47.819 "num_base_bdevs": 3, 00:07:47.819 "num_base_bdevs_discovered": 1, 00:07:47.819 "num_base_bdevs_operational": 3, 00:07:47.819 "base_bdevs_list": [ 00:07:47.819 { 00:07:47.819 "name": "BaseBdev1", 00:07:47.819 "uuid": "d4921d6a-c03a-45fe-92f3-95fec7db880e", 00:07:47.819 "is_configured": true, 00:07:47.819 "data_offset": 2048, 00:07:47.819 "data_size": 63488 
00:07:47.819 }, 00:07:47.819 { 00:07:47.819 "name": "BaseBdev2", 00:07:47.819 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:47.819 "is_configured": false, 00:07:47.819 "data_offset": 0, 00:07:47.819 "data_size": 0 00:07:47.819 }, 00:07:47.819 { 00:07:47.819 "name": "BaseBdev3", 00:07:47.819 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:47.819 "is_configured": false, 00:07:47.819 "data_offset": 0, 00:07:47.819 "data_size": 0 00:07:47.819 } 00:07:47.819 ] 00:07:47.819 }' 00:07:47.819 04:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:47.819 04:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:48.076 04:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:48.076 04:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.076 04:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:48.076 [2024-12-14 04:56:58.807494] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:48.076 [2024-12-14 04:56:58.807604] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:07:48.076 04:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.076 04:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:07:48.076 04:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.076 04:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:48.076 [2024-12-14 04:56:58.815495] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:48.076 [2024-12-14 
04:56:58.817335] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:48.076 [2024-12-14 04:56:58.817376] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:48.076 [2024-12-14 04:56:58.817386] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:07:48.076 [2024-12-14 04:56:58.817395] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:07:48.076 04:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.076 04:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:48.076 04:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:48.076 04:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:48.076 04:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:48.076 04:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:48.076 04:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:48.076 04:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:48.077 04:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:48.077 04:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:48.077 04:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:48.077 04:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:48.077 04:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:07:48.077 04:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:48.077 04:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:48.077 04:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.077 04:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:48.077 04:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.077 04:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:48.077 "name": "Existed_Raid", 00:07:48.077 "uuid": "9134c085-2b41-40e0-a6ab-8f589f4f6adc", 00:07:48.077 "strip_size_kb": 64, 00:07:48.077 "state": "configuring", 00:07:48.077 "raid_level": "raid0", 00:07:48.077 "superblock": true, 00:07:48.077 "num_base_bdevs": 3, 00:07:48.077 "num_base_bdevs_discovered": 1, 00:07:48.077 "num_base_bdevs_operational": 3, 00:07:48.077 "base_bdevs_list": [ 00:07:48.077 { 00:07:48.077 "name": "BaseBdev1", 00:07:48.077 "uuid": "d4921d6a-c03a-45fe-92f3-95fec7db880e", 00:07:48.077 "is_configured": true, 00:07:48.077 "data_offset": 2048, 00:07:48.077 "data_size": 63488 00:07:48.077 }, 00:07:48.077 { 00:07:48.077 "name": "BaseBdev2", 00:07:48.077 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:48.077 "is_configured": false, 00:07:48.077 "data_offset": 0, 00:07:48.077 "data_size": 0 00:07:48.077 }, 00:07:48.077 { 00:07:48.077 "name": "BaseBdev3", 00:07:48.077 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:48.077 "is_configured": false, 00:07:48.077 "data_offset": 0, 00:07:48.077 "data_size": 0 00:07:48.077 } 00:07:48.077 ] 00:07:48.077 }' 00:07:48.077 04:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:48.077 04:56:58 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:07:48.645 04:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:48.645 04:56:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.645 04:56:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:48.645 [2024-12-14 04:56:59.256843] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:48.645 BaseBdev2 00:07:48.645 04:56:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.645 04:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:48.645 04:56:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:07:48.645 04:56:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:48.645 04:56:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:07:48.645 04:56:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:48.645 04:56:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:48.645 04:56:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:48.645 04:56:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.645 04:56:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:48.645 04:56:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.645 04:56:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:48.645 04:56:59 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.645 04:56:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:48.645 [ 00:07:48.645 { 00:07:48.645 "name": "BaseBdev2", 00:07:48.645 "aliases": [ 00:07:48.645 "06c29c13-7383-4e8f-a864-dfde6f5079fa" 00:07:48.645 ], 00:07:48.645 "product_name": "Malloc disk", 00:07:48.645 "block_size": 512, 00:07:48.645 "num_blocks": 65536, 00:07:48.645 "uuid": "06c29c13-7383-4e8f-a864-dfde6f5079fa", 00:07:48.645 "assigned_rate_limits": { 00:07:48.645 "rw_ios_per_sec": 0, 00:07:48.645 "rw_mbytes_per_sec": 0, 00:07:48.645 "r_mbytes_per_sec": 0, 00:07:48.645 "w_mbytes_per_sec": 0 00:07:48.645 }, 00:07:48.645 "claimed": true, 00:07:48.645 "claim_type": "exclusive_write", 00:07:48.645 "zoned": false, 00:07:48.645 "supported_io_types": { 00:07:48.645 "read": true, 00:07:48.645 "write": true, 00:07:48.645 "unmap": true, 00:07:48.645 "flush": true, 00:07:48.645 "reset": true, 00:07:48.645 "nvme_admin": false, 00:07:48.645 "nvme_io": false, 00:07:48.645 "nvme_io_md": false, 00:07:48.645 "write_zeroes": true, 00:07:48.645 "zcopy": true, 00:07:48.645 "get_zone_info": false, 00:07:48.645 "zone_management": false, 00:07:48.645 "zone_append": false, 00:07:48.645 "compare": false, 00:07:48.645 "compare_and_write": false, 00:07:48.645 "abort": true, 00:07:48.646 "seek_hole": false, 00:07:48.646 "seek_data": false, 00:07:48.646 "copy": true, 00:07:48.646 "nvme_iov_md": false 00:07:48.646 }, 00:07:48.646 "memory_domains": [ 00:07:48.646 { 00:07:48.646 "dma_device_id": "system", 00:07:48.646 "dma_device_type": 1 00:07:48.646 }, 00:07:48.646 { 00:07:48.646 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:48.646 "dma_device_type": 2 00:07:48.646 } 00:07:48.646 ], 00:07:48.646 "driver_specific": {} 00:07:48.646 } 00:07:48.646 ] 00:07:48.646 04:56:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.646 04:56:59 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@907 -- # return 0 00:07:48.646 04:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:48.646 04:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:48.646 04:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:48.646 04:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:48.646 04:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:48.646 04:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:48.646 04:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:48.646 04:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:48.646 04:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:48.646 04:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:48.646 04:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:48.646 04:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:48.646 04:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:48.646 04:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:48.646 04:56:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.646 04:56:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:48.646 04:56:59 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.646 04:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:48.646 "name": "Existed_Raid", 00:07:48.646 "uuid": "9134c085-2b41-40e0-a6ab-8f589f4f6adc", 00:07:48.646 "strip_size_kb": 64, 00:07:48.646 "state": "configuring", 00:07:48.646 "raid_level": "raid0", 00:07:48.646 "superblock": true, 00:07:48.646 "num_base_bdevs": 3, 00:07:48.646 "num_base_bdevs_discovered": 2, 00:07:48.646 "num_base_bdevs_operational": 3, 00:07:48.646 "base_bdevs_list": [ 00:07:48.646 { 00:07:48.646 "name": "BaseBdev1", 00:07:48.646 "uuid": "d4921d6a-c03a-45fe-92f3-95fec7db880e", 00:07:48.646 "is_configured": true, 00:07:48.646 "data_offset": 2048, 00:07:48.646 "data_size": 63488 00:07:48.646 }, 00:07:48.646 { 00:07:48.646 "name": "BaseBdev2", 00:07:48.646 "uuid": "06c29c13-7383-4e8f-a864-dfde6f5079fa", 00:07:48.646 "is_configured": true, 00:07:48.646 "data_offset": 2048, 00:07:48.646 "data_size": 63488 00:07:48.646 }, 00:07:48.646 { 00:07:48.646 "name": "BaseBdev3", 00:07:48.646 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:48.646 "is_configured": false, 00:07:48.646 "data_offset": 0, 00:07:48.646 "data_size": 0 00:07:48.646 } 00:07:48.646 ] 00:07:48.646 }' 00:07:48.646 04:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:48.646 04:56:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:48.906 04:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:07:48.906 04:56:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.906 04:56:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:48.906 [2024-12-14 04:56:59.675099] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:07:48.906 [2024-12-14 04:56:59.675327] 
bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:07:48.906 [2024-12-14 04:56:59.675351] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:07:48.906 BaseBdev3 00:07:48.906 [2024-12-14 04:56:59.675639] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:07:48.906 [2024-12-14 04:56:59.675763] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:07:48.906 [2024-12-14 04:56:59.675778] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:07:48.906 [2024-12-14 04:56:59.675897] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:48.906 04:56:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.906 04:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:07:48.906 04:56:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:07:48.906 04:56:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:48.906 04:56:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:07:48.906 04:56:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:48.906 04:56:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:48.906 04:56:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:48.907 04:56:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.907 04:56:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:48.907 04:56:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:07:48.907 04:56:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:07:48.907 04:56:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.907 04:56:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:48.907 [ 00:07:48.907 { 00:07:48.907 "name": "BaseBdev3", 00:07:48.907 "aliases": [ 00:07:48.907 "469919ca-c383-4983-b023-e72d36db173b" 00:07:48.907 ], 00:07:48.907 "product_name": "Malloc disk", 00:07:48.907 "block_size": 512, 00:07:48.907 "num_blocks": 65536, 00:07:48.907 "uuid": "469919ca-c383-4983-b023-e72d36db173b", 00:07:48.907 "assigned_rate_limits": { 00:07:48.907 "rw_ios_per_sec": 0, 00:07:48.907 "rw_mbytes_per_sec": 0, 00:07:48.907 "r_mbytes_per_sec": 0, 00:07:48.907 "w_mbytes_per_sec": 0 00:07:48.907 }, 00:07:48.907 "claimed": true, 00:07:48.907 "claim_type": "exclusive_write", 00:07:48.907 "zoned": false, 00:07:48.907 "supported_io_types": { 00:07:48.907 "read": true, 00:07:48.907 "write": true, 00:07:48.907 "unmap": true, 00:07:48.907 "flush": true, 00:07:48.907 "reset": true, 00:07:48.907 "nvme_admin": false, 00:07:48.907 "nvme_io": false, 00:07:48.907 "nvme_io_md": false, 00:07:48.907 "write_zeroes": true, 00:07:48.907 "zcopy": true, 00:07:48.907 "get_zone_info": false, 00:07:48.907 "zone_management": false, 00:07:48.907 "zone_append": false, 00:07:48.907 "compare": false, 00:07:48.907 "compare_and_write": false, 00:07:48.907 "abort": true, 00:07:48.907 "seek_hole": false, 00:07:48.907 "seek_data": false, 00:07:48.907 "copy": true, 00:07:48.907 "nvme_iov_md": false 00:07:48.907 }, 00:07:48.907 "memory_domains": [ 00:07:48.907 { 00:07:48.907 "dma_device_id": "system", 00:07:48.907 "dma_device_type": 1 00:07:48.907 }, 00:07:48.907 { 00:07:48.907 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:48.907 "dma_device_type": 2 00:07:48.907 } 00:07:48.907 ], 00:07:48.907 "driver_specific": 
{} 00:07:48.907 } 00:07:48.907 ] 00:07:48.907 04:56:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.907 04:56:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:07:48.907 04:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:48.907 04:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:48.907 04:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:07:48.907 04:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:48.907 04:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:48.907 04:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:48.907 04:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:48.907 04:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:48.907 04:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:48.907 04:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:48.907 04:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:48.907 04:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:48.907 04:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:48.907 04:56:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.907 04:56:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:48.907 
04:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:48.907 04:56:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.907 04:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:48.907 "name": "Existed_Raid", 00:07:48.907 "uuid": "9134c085-2b41-40e0-a6ab-8f589f4f6adc", 00:07:48.907 "strip_size_kb": 64, 00:07:48.907 "state": "online", 00:07:48.907 "raid_level": "raid0", 00:07:48.907 "superblock": true, 00:07:48.907 "num_base_bdevs": 3, 00:07:48.907 "num_base_bdevs_discovered": 3, 00:07:48.907 "num_base_bdevs_operational": 3, 00:07:48.907 "base_bdevs_list": [ 00:07:48.907 { 00:07:48.907 "name": "BaseBdev1", 00:07:48.907 "uuid": "d4921d6a-c03a-45fe-92f3-95fec7db880e", 00:07:48.907 "is_configured": true, 00:07:48.907 "data_offset": 2048, 00:07:48.907 "data_size": 63488 00:07:48.907 }, 00:07:48.907 { 00:07:48.907 "name": "BaseBdev2", 00:07:48.907 "uuid": "06c29c13-7383-4e8f-a864-dfde6f5079fa", 00:07:48.907 "is_configured": true, 00:07:48.907 "data_offset": 2048, 00:07:48.907 "data_size": 63488 00:07:48.907 }, 00:07:48.907 { 00:07:48.907 "name": "BaseBdev3", 00:07:48.907 "uuid": "469919ca-c383-4983-b023-e72d36db173b", 00:07:48.907 "is_configured": true, 00:07:48.907 "data_offset": 2048, 00:07:48.907 "data_size": 63488 00:07:48.907 } 00:07:48.907 ] 00:07:48.907 }' 00:07:48.907 04:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:48.907 04:56:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:49.475 04:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:49.475 04:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:49.475 04:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- 
# local raid_bdev_info 00:07:49.475 04:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:49.475 04:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:07:49.475 04:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:49.475 04:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:49.475 04:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:49.475 04:57:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:49.475 04:57:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:49.475 [2024-12-14 04:57:00.170578] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:49.475 04:57:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:49.475 04:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:49.475 "name": "Existed_Raid", 00:07:49.475 "aliases": [ 00:07:49.475 "9134c085-2b41-40e0-a6ab-8f589f4f6adc" 00:07:49.475 ], 00:07:49.475 "product_name": "Raid Volume", 00:07:49.475 "block_size": 512, 00:07:49.475 "num_blocks": 190464, 00:07:49.475 "uuid": "9134c085-2b41-40e0-a6ab-8f589f4f6adc", 00:07:49.475 "assigned_rate_limits": { 00:07:49.475 "rw_ios_per_sec": 0, 00:07:49.475 "rw_mbytes_per_sec": 0, 00:07:49.475 "r_mbytes_per_sec": 0, 00:07:49.475 "w_mbytes_per_sec": 0 00:07:49.475 }, 00:07:49.475 "claimed": false, 00:07:49.475 "zoned": false, 00:07:49.475 "supported_io_types": { 00:07:49.475 "read": true, 00:07:49.475 "write": true, 00:07:49.475 "unmap": true, 00:07:49.475 "flush": true, 00:07:49.475 "reset": true, 00:07:49.475 "nvme_admin": false, 00:07:49.475 "nvme_io": false, 00:07:49.475 "nvme_io_md": false, 00:07:49.475 
"write_zeroes": true, 00:07:49.475 "zcopy": false, 00:07:49.475 "get_zone_info": false, 00:07:49.475 "zone_management": false, 00:07:49.475 "zone_append": false, 00:07:49.475 "compare": false, 00:07:49.475 "compare_and_write": false, 00:07:49.475 "abort": false, 00:07:49.475 "seek_hole": false, 00:07:49.475 "seek_data": false, 00:07:49.475 "copy": false, 00:07:49.475 "nvme_iov_md": false 00:07:49.475 }, 00:07:49.475 "memory_domains": [ 00:07:49.475 { 00:07:49.475 "dma_device_id": "system", 00:07:49.476 "dma_device_type": 1 00:07:49.476 }, 00:07:49.476 { 00:07:49.476 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:49.476 "dma_device_type": 2 00:07:49.476 }, 00:07:49.476 { 00:07:49.476 "dma_device_id": "system", 00:07:49.476 "dma_device_type": 1 00:07:49.476 }, 00:07:49.476 { 00:07:49.476 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:49.476 "dma_device_type": 2 00:07:49.476 }, 00:07:49.476 { 00:07:49.476 "dma_device_id": "system", 00:07:49.476 "dma_device_type": 1 00:07:49.476 }, 00:07:49.476 { 00:07:49.476 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:49.476 "dma_device_type": 2 00:07:49.476 } 00:07:49.476 ], 00:07:49.476 "driver_specific": { 00:07:49.476 "raid": { 00:07:49.476 "uuid": "9134c085-2b41-40e0-a6ab-8f589f4f6adc", 00:07:49.476 "strip_size_kb": 64, 00:07:49.476 "state": "online", 00:07:49.476 "raid_level": "raid0", 00:07:49.476 "superblock": true, 00:07:49.476 "num_base_bdevs": 3, 00:07:49.476 "num_base_bdevs_discovered": 3, 00:07:49.476 "num_base_bdevs_operational": 3, 00:07:49.476 "base_bdevs_list": [ 00:07:49.476 { 00:07:49.476 "name": "BaseBdev1", 00:07:49.476 "uuid": "d4921d6a-c03a-45fe-92f3-95fec7db880e", 00:07:49.476 "is_configured": true, 00:07:49.476 "data_offset": 2048, 00:07:49.476 "data_size": 63488 00:07:49.476 }, 00:07:49.476 { 00:07:49.476 "name": "BaseBdev2", 00:07:49.476 "uuid": "06c29c13-7383-4e8f-a864-dfde6f5079fa", 00:07:49.476 "is_configured": true, 00:07:49.476 "data_offset": 2048, 00:07:49.476 "data_size": 63488 00:07:49.476 }, 
00:07:49.476 { 00:07:49.476 "name": "BaseBdev3", 00:07:49.476 "uuid": "469919ca-c383-4983-b023-e72d36db173b", 00:07:49.476 "is_configured": true, 00:07:49.476 "data_offset": 2048, 00:07:49.476 "data_size": 63488 00:07:49.476 } 00:07:49.476 ] 00:07:49.476 } 00:07:49.476 } 00:07:49.476 }' 00:07:49.476 04:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:49.476 04:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:49.476 BaseBdev2 00:07:49.476 BaseBdev3' 00:07:49.476 04:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:49.476 04:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:49.476 04:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:49.476 04:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:49.476 04:57:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:49.476 04:57:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:49.476 04:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:49.476 04:57:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:49.476 04:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:49.476 04:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:49.476 04:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:49.476 
04:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:49.476 04:57:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:49.476 04:57:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:49.476 04:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:49.735 04:57:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:49.735 04:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:49.735 04:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:49.735 04:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:49.736 04:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:49.736 04:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:07:49.736 04:57:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:49.736 04:57:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:49.736 04:57:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:49.736 04:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:49.736 04:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:49.736 04:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:49.736 04:57:00 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:07:49.736 04:57:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:49.736 [2024-12-14 04:57:00.421918] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:49.736 [2024-12-14 04:57:00.421988] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:49.736 [2024-12-14 04:57:00.422074] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:49.736 04:57:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:49.736 04:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:49.736 04:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:07:49.736 04:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:49.736 04:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:07:49.736 04:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:07:49.736 04:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:07:49.736 04:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:49.736 04:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:49.736 04:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:49.736 04:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:49.736 04:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:49.736 04:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:07:49.736 04:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:49.736 04:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:49.736 04:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:49.736 04:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:49.736 04:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:49.736 04:57:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:49.736 04:57:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:49.736 04:57:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:49.736 04:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:49.736 "name": "Existed_Raid", 00:07:49.736 "uuid": "9134c085-2b41-40e0-a6ab-8f589f4f6adc", 00:07:49.736 "strip_size_kb": 64, 00:07:49.736 "state": "offline", 00:07:49.736 "raid_level": "raid0", 00:07:49.736 "superblock": true, 00:07:49.736 "num_base_bdevs": 3, 00:07:49.736 "num_base_bdevs_discovered": 2, 00:07:49.736 "num_base_bdevs_operational": 2, 00:07:49.736 "base_bdevs_list": [ 00:07:49.736 { 00:07:49.736 "name": null, 00:07:49.736 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:49.736 "is_configured": false, 00:07:49.736 "data_offset": 0, 00:07:49.736 "data_size": 63488 00:07:49.736 }, 00:07:49.736 { 00:07:49.736 "name": "BaseBdev2", 00:07:49.736 "uuid": "06c29c13-7383-4e8f-a864-dfde6f5079fa", 00:07:49.736 "is_configured": true, 00:07:49.736 "data_offset": 2048, 00:07:49.736 "data_size": 63488 00:07:49.736 }, 00:07:49.736 { 00:07:49.736 "name": "BaseBdev3", 00:07:49.736 "uuid": "469919ca-c383-4983-b023-e72d36db173b", 
00:07:49.736 "is_configured": true, 00:07:49.736 "data_offset": 2048, 00:07:49.736 "data_size": 63488 00:07:49.736 } 00:07:49.736 ] 00:07:49.736 }' 00:07:49.736 04:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:49.736 04:57:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:50.305 04:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:50.305 04:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:50.305 04:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:50.305 04:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:50.305 04:57:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.305 04:57:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:50.305 04:57:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.305 04:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:50.305 04:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:50.305 04:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:50.305 04:57:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.305 04:57:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:50.305 [2024-12-14 04:57:00.916536] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:50.305 04:57:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.305 04:57:00 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:50.305 04:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:50.305 04:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:50.305 04:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:50.305 04:57:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.305 04:57:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:50.305 04:57:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.305 04:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:50.305 04:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:50.305 04:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:07:50.305 04:57:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.305 04:57:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:50.305 [2024-12-14 04:57:00.983642] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:07:50.305 [2024-12-14 04:57:00.983731] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:07:50.305 04:57:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.305 04:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:50.305 04:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:50.305 04:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:07:50.305 04:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:50.305 04:57:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.305 04:57:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:50.305 04:57:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.305 04:57:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:50.305 04:57:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:50.305 04:57:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:07:50.305 04:57:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:07:50.305 04:57:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:07:50.305 04:57:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:50.305 04:57:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.305 04:57:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:50.305 BaseBdev2 00:07:50.305 04:57:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.305 04:57:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:07:50.305 04:57:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:07:50.305 04:57:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:50.305 04:57:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:07:50.305 04:57:01 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:50.305 04:57:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:50.305 04:57:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:50.305 04:57:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.305 04:57:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:50.305 04:57:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.305 04:57:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:50.305 04:57:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.305 04:57:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:50.305 [ 00:07:50.305 { 00:07:50.305 "name": "BaseBdev2", 00:07:50.305 "aliases": [ 00:07:50.305 "8fc05fc8-5e92-462f-9440-6f373faa34c2" 00:07:50.305 ], 00:07:50.305 "product_name": "Malloc disk", 00:07:50.305 "block_size": 512, 00:07:50.305 "num_blocks": 65536, 00:07:50.305 "uuid": "8fc05fc8-5e92-462f-9440-6f373faa34c2", 00:07:50.305 "assigned_rate_limits": { 00:07:50.305 "rw_ios_per_sec": 0, 00:07:50.305 "rw_mbytes_per_sec": 0, 00:07:50.305 "r_mbytes_per_sec": 0, 00:07:50.305 "w_mbytes_per_sec": 0 00:07:50.305 }, 00:07:50.305 "claimed": false, 00:07:50.305 "zoned": false, 00:07:50.305 "supported_io_types": { 00:07:50.305 "read": true, 00:07:50.305 "write": true, 00:07:50.305 "unmap": true, 00:07:50.305 "flush": true, 00:07:50.305 "reset": true, 00:07:50.305 "nvme_admin": false, 00:07:50.305 "nvme_io": false, 00:07:50.305 "nvme_io_md": false, 00:07:50.305 "write_zeroes": true, 00:07:50.305 "zcopy": true, 00:07:50.305 "get_zone_info": false, 00:07:50.305 
"zone_management": false, 00:07:50.305 "zone_append": false, 00:07:50.305 "compare": false, 00:07:50.305 "compare_and_write": false, 00:07:50.305 "abort": true, 00:07:50.305 "seek_hole": false, 00:07:50.305 "seek_data": false, 00:07:50.305 "copy": true, 00:07:50.305 "nvme_iov_md": false 00:07:50.305 }, 00:07:50.305 "memory_domains": [ 00:07:50.305 { 00:07:50.305 "dma_device_id": "system", 00:07:50.305 "dma_device_type": 1 00:07:50.305 }, 00:07:50.305 { 00:07:50.305 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:50.305 "dma_device_type": 2 00:07:50.305 } 00:07:50.305 ], 00:07:50.305 "driver_specific": {} 00:07:50.305 } 00:07:50.305 ] 00:07:50.305 04:57:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.305 04:57:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:07:50.305 04:57:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:07:50.305 04:57:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:07:50.305 04:57:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:07:50.305 04:57:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.305 04:57:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:50.305 BaseBdev3 00:07:50.305 04:57:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.305 04:57:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:07:50.305 04:57:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:07:50.305 04:57:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:50.305 04:57:01 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@901 -- # local i 00:07:50.306 04:57:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:50.306 04:57:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:50.306 04:57:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:50.306 04:57:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.306 04:57:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:50.306 04:57:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.306 04:57:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:07:50.306 04:57:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.306 04:57:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:50.306 [ 00:07:50.306 { 00:07:50.306 "name": "BaseBdev3", 00:07:50.306 "aliases": [ 00:07:50.306 "87080c98-2fff-4628-940d-1a2077681b8f" 00:07:50.306 ], 00:07:50.306 "product_name": "Malloc disk", 00:07:50.306 "block_size": 512, 00:07:50.306 "num_blocks": 65536, 00:07:50.306 "uuid": "87080c98-2fff-4628-940d-1a2077681b8f", 00:07:50.306 "assigned_rate_limits": { 00:07:50.306 "rw_ios_per_sec": 0, 00:07:50.306 "rw_mbytes_per_sec": 0, 00:07:50.306 "r_mbytes_per_sec": 0, 00:07:50.306 "w_mbytes_per_sec": 0 00:07:50.306 }, 00:07:50.306 "claimed": false, 00:07:50.306 "zoned": false, 00:07:50.306 "supported_io_types": { 00:07:50.306 "read": true, 00:07:50.306 "write": true, 00:07:50.306 "unmap": true, 00:07:50.306 "flush": true, 00:07:50.306 "reset": true, 00:07:50.306 "nvme_admin": false, 00:07:50.306 "nvme_io": false, 00:07:50.306 "nvme_io_md": false, 00:07:50.306 "write_zeroes": true, 00:07:50.306 
"zcopy": true, 00:07:50.306 "get_zone_info": false, 00:07:50.306 "zone_management": false, 00:07:50.306 "zone_append": false, 00:07:50.306 "compare": false, 00:07:50.306 "compare_and_write": false, 00:07:50.306 "abort": true, 00:07:50.306 "seek_hole": false, 00:07:50.306 "seek_data": false, 00:07:50.306 "copy": true, 00:07:50.306 "nvme_iov_md": false 00:07:50.306 }, 00:07:50.306 "memory_domains": [ 00:07:50.306 { 00:07:50.306 "dma_device_id": "system", 00:07:50.306 "dma_device_type": 1 00:07:50.306 }, 00:07:50.306 { 00:07:50.306 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:50.306 "dma_device_type": 2 00:07:50.306 } 00:07:50.306 ], 00:07:50.306 "driver_specific": {} 00:07:50.306 } 00:07:50.306 ] 00:07:50.306 04:57:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.306 04:57:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:07:50.306 04:57:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:07:50.306 04:57:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:07:50.306 04:57:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:07:50.306 04:57:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.306 04:57:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:50.306 [2024-12-14 04:57:01.158591] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:50.306 [2024-12-14 04:57:01.158678] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:50.306 [2024-12-14 04:57:01.158720] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:50.306 [2024-12-14 04:57:01.160571] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:07:50.306 04:57:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.306 04:57:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:50.306 04:57:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:50.306 04:57:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:50.306 04:57:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:50.306 04:57:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:50.306 04:57:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:50.306 04:57:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:50.306 04:57:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:50.306 04:57:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:50.306 04:57:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:50.306 04:57:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:50.306 04:57:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:50.306 04:57:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.306 04:57:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:50.566 04:57:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.566 04:57:01 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:50.566 "name": "Existed_Raid", 00:07:50.566 "uuid": "9f7be38e-e675-4b78-884a-f86ed21122dc", 00:07:50.566 "strip_size_kb": 64, 00:07:50.566 "state": "configuring", 00:07:50.566 "raid_level": "raid0", 00:07:50.566 "superblock": true, 00:07:50.566 "num_base_bdevs": 3, 00:07:50.566 "num_base_bdevs_discovered": 2, 00:07:50.566 "num_base_bdevs_operational": 3, 00:07:50.566 "base_bdevs_list": [ 00:07:50.566 { 00:07:50.566 "name": "BaseBdev1", 00:07:50.566 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:50.566 "is_configured": false, 00:07:50.566 "data_offset": 0, 00:07:50.566 "data_size": 0 00:07:50.566 }, 00:07:50.566 { 00:07:50.566 "name": "BaseBdev2", 00:07:50.566 "uuid": "8fc05fc8-5e92-462f-9440-6f373faa34c2", 00:07:50.566 "is_configured": true, 00:07:50.566 "data_offset": 2048, 00:07:50.566 "data_size": 63488 00:07:50.566 }, 00:07:50.566 { 00:07:50.566 "name": "BaseBdev3", 00:07:50.566 "uuid": "87080c98-2fff-4628-940d-1a2077681b8f", 00:07:50.566 "is_configured": true, 00:07:50.566 "data_offset": 2048, 00:07:50.566 "data_size": 63488 00:07:50.566 } 00:07:50.566 ] 00:07:50.566 }' 00:07:50.566 04:57:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:50.566 04:57:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:50.825 04:57:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:07:50.825 04:57:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.825 04:57:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:50.825 [2024-12-14 04:57:01.649733] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:50.825 04:57:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.825 04:57:01 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:50.825 04:57:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:50.825 04:57:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:50.825 04:57:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:50.825 04:57:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:50.825 04:57:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:50.825 04:57:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:50.825 04:57:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:50.825 04:57:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:50.825 04:57:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:50.825 04:57:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:50.825 04:57:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:50.825 04:57:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.825 04:57:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:50.825 04:57:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.825 04:57:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:50.825 "name": "Existed_Raid", 00:07:50.825 "uuid": "9f7be38e-e675-4b78-884a-f86ed21122dc", 00:07:50.825 "strip_size_kb": 64, 
00:07:50.825 "state": "configuring", 00:07:50.825 "raid_level": "raid0", 00:07:50.825 "superblock": true, 00:07:50.825 "num_base_bdevs": 3, 00:07:50.825 "num_base_bdevs_discovered": 1, 00:07:50.825 "num_base_bdevs_operational": 3, 00:07:50.825 "base_bdevs_list": [ 00:07:50.825 { 00:07:50.825 "name": "BaseBdev1", 00:07:50.825 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:50.825 "is_configured": false, 00:07:50.825 "data_offset": 0, 00:07:50.825 "data_size": 0 00:07:50.825 }, 00:07:50.825 { 00:07:50.825 "name": null, 00:07:50.825 "uuid": "8fc05fc8-5e92-462f-9440-6f373faa34c2", 00:07:50.825 "is_configured": false, 00:07:50.825 "data_offset": 0, 00:07:50.825 "data_size": 63488 00:07:50.825 }, 00:07:50.825 { 00:07:50.825 "name": "BaseBdev3", 00:07:50.825 "uuid": "87080c98-2fff-4628-940d-1a2077681b8f", 00:07:50.825 "is_configured": true, 00:07:50.825 "data_offset": 2048, 00:07:50.825 "data_size": 63488 00:07:50.825 } 00:07:50.825 ] 00:07:50.825 }' 00:07:50.825 04:57:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:50.825 04:57:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:51.394 04:57:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:07:51.394 04:57:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:51.394 04:57:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.394 04:57:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:51.394 04:57:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.394 04:57:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:07:51.394 04:57:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev1 00:07:51.394 04:57:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.394 04:57:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:51.394 [2024-12-14 04:57:02.036106] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:51.394 BaseBdev1 00:07:51.394 04:57:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.394 04:57:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:07:51.394 04:57:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:07:51.394 04:57:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:51.394 04:57:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:07:51.394 04:57:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:51.394 04:57:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:51.394 04:57:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:51.394 04:57:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.394 04:57:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:51.394 04:57:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.394 04:57:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:51.394 04:57:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.394 04:57:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:51.394 
[ 00:07:51.394 { 00:07:51.394 "name": "BaseBdev1", 00:07:51.394 "aliases": [ 00:07:51.394 "ef44ff50-eaf1-470f-b2f8-36771e96cb81" 00:07:51.394 ], 00:07:51.394 "product_name": "Malloc disk", 00:07:51.394 "block_size": 512, 00:07:51.394 "num_blocks": 65536, 00:07:51.394 "uuid": "ef44ff50-eaf1-470f-b2f8-36771e96cb81", 00:07:51.394 "assigned_rate_limits": { 00:07:51.394 "rw_ios_per_sec": 0, 00:07:51.394 "rw_mbytes_per_sec": 0, 00:07:51.394 "r_mbytes_per_sec": 0, 00:07:51.394 "w_mbytes_per_sec": 0 00:07:51.394 }, 00:07:51.394 "claimed": true, 00:07:51.394 "claim_type": "exclusive_write", 00:07:51.394 "zoned": false, 00:07:51.394 "supported_io_types": { 00:07:51.394 "read": true, 00:07:51.394 "write": true, 00:07:51.394 "unmap": true, 00:07:51.394 "flush": true, 00:07:51.394 "reset": true, 00:07:51.394 "nvme_admin": false, 00:07:51.394 "nvme_io": false, 00:07:51.394 "nvme_io_md": false, 00:07:51.394 "write_zeroes": true, 00:07:51.394 "zcopy": true, 00:07:51.394 "get_zone_info": false, 00:07:51.394 "zone_management": false, 00:07:51.394 "zone_append": false, 00:07:51.394 "compare": false, 00:07:51.394 "compare_and_write": false, 00:07:51.394 "abort": true, 00:07:51.394 "seek_hole": false, 00:07:51.394 "seek_data": false, 00:07:51.394 "copy": true, 00:07:51.394 "nvme_iov_md": false 00:07:51.394 }, 00:07:51.394 "memory_domains": [ 00:07:51.394 { 00:07:51.394 "dma_device_id": "system", 00:07:51.394 "dma_device_type": 1 00:07:51.394 }, 00:07:51.394 { 00:07:51.394 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:51.394 "dma_device_type": 2 00:07:51.394 } 00:07:51.394 ], 00:07:51.394 "driver_specific": {} 00:07:51.394 } 00:07:51.394 ] 00:07:51.394 04:57:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.394 04:57:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:07:51.394 04:57:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid 
configuring raid0 64 3 00:07:51.394 04:57:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:51.394 04:57:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:51.394 04:57:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:51.394 04:57:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:51.394 04:57:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:51.394 04:57:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:51.394 04:57:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:51.394 04:57:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:51.394 04:57:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:51.394 04:57:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:51.394 04:57:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:51.394 04:57:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.394 04:57:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:51.394 04:57:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.394 04:57:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:51.394 "name": "Existed_Raid", 00:07:51.394 "uuid": "9f7be38e-e675-4b78-884a-f86ed21122dc", 00:07:51.394 "strip_size_kb": 64, 00:07:51.394 "state": "configuring", 00:07:51.394 "raid_level": "raid0", 00:07:51.394 "superblock": true, 
00:07:51.394 "num_base_bdevs": 3, 00:07:51.394 "num_base_bdevs_discovered": 2, 00:07:51.394 "num_base_bdevs_operational": 3, 00:07:51.394 "base_bdevs_list": [ 00:07:51.394 { 00:07:51.394 "name": "BaseBdev1", 00:07:51.394 "uuid": "ef44ff50-eaf1-470f-b2f8-36771e96cb81", 00:07:51.394 "is_configured": true, 00:07:51.394 "data_offset": 2048, 00:07:51.394 "data_size": 63488 00:07:51.394 }, 00:07:51.394 { 00:07:51.394 "name": null, 00:07:51.394 "uuid": "8fc05fc8-5e92-462f-9440-6f373faa34c2", 00:07:51.394 "is_configured": false, 00:07:51.394 "data_offset": 0, 00:07:51.394 "data_size": 63488 00:07:51.394 }, 00:07:51.394 { 00:07:51.394 "name": "BaseBdev3", 00:07:51.394 "uuid": "87080c98-2fff-4628-940d-1a2077681b8f", 00:07:51.394 "is_configured": true, 00:07:51.394 "data_offset": 2048, 00:07:51.394 "data_size": 63488 00:07:51.394 } 00:07:51.394 ] 00:07:51.394 }' 00:07:51.394 04:57:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:51.394 04:57:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:51.654 04:57:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:51.654 04:57:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.654 04:57:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:51.654 04:57:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:07:51.654 04:57:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.654 04:57:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:07:51.654 04:57:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:07:51.654 04:57:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- 
# xtrace_disable 00:07:51.654 04:57:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:51.913 [2024-12-14 04:57:02.539308] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:07:51.913 04:57:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.913 04:57:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:51.913 04:57:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:51.913 04:57:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:51.913 04:57:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:51.913 04:57:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:51.913 04:57:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:51.913 04:57:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:51.913 04:57:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:51.913 04:57:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:51.913 04:57:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:51.913 04:57:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:51.913 04:57:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:51.913 04:57:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.913 04:57:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:07:51.913 04:57:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.913 04:57:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:51.913 "name": "Existed_Raid", 00:07:51.913 "uuid": "9f7be38e-e675-4b78-884a-f86ed21122dc", 00:07:51.913 "strip_size_kb": 64, 00:07:51.913 "state": "configuring", 00:07:51.913 "raid_level": "raid0", 00:07:51.913 "superblock": true, 00:07:51.913 "num_base_bdevs": 3, 00:07:51.913 "num_base_bdevs_discovered": 1, 00:07:51.913 "num_base_bdevs_operational": 3, 00:07:51.913 "base_bdevs_list": [ 00:07:51.913 { 00:07:51.913 "name": "BaseBdev1", 00:07:51.913 "uuid": "ef44ff50-eaf1-470f-b2f8-36771e96cb81", 00:07:51.913 "is_configured": true, 00:07:51.913 "data_offset": 2048, 00:07:51.913 "data_size": 63488 00:07:51.913 }, 00:07:51.913 { 00:07:51.913 "name": null, 00:07:51.913 "uuid": "8fc05fc8-5e92-462f-9440-6f373faa34c2", 00:07:51.913 "is_configured": false, 00:07:51.913 "data_offset": 0, 00:07:51.913 "data_size": 63488 00:07:51.913 }, 00:07:51.913 { 00:07:51.913 "name": null, 00:07:51.913 "uuid": "87080c98-2fff-4628-940d-1a2077681b8f", 00:07:51.913 "is_configured": false, 00:07:51.913 "data_offset": 0, 00:07:51.913 "data_size": 63488 00:07:51.913 } 00:07:51.913 ] 00:07:51.913 }' 00:07:51.913 04:57:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:51.913 04:57:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:52.172 04:57:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:52.172 04:57:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:07:52.172 04:57:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.172 04:57:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:07:52.172 04:57:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:52.172 04:57:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:07:52.172 04:57:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:07:52.172 04:57:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.172 04:57:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:52.172 [2024-12-14 04:57:03.007309] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:07:52.172 04:57:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:52.172 04:57:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:52.172 04:57:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:52.172 04:57:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:52.172 04:57:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:52.172 04:57:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:52.172 04:57:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:52.172 04:57:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:52.172 04:57:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:52.172 04:57:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:52.172 04:57:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:07:52.172 04:57:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:52.172 04:57:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:52.172 04:57:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.172 04:57:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:52.172 04:57:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:52.172 04:57:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:52.172 "name": "Existed_Raid", 00:07:52.172 "uuid": "9f7be38e-e675-4b78-884a-f86ed21122dc", 00:07:52.172 "strip_size_kb": 64, 00:07:52.172 "state": "configuring", 00:07:52.172 "raid_level": "raid0", 00:07:52.172 "superblock": true, 00:07:52.172 "num_base_bdevs": 3, 00:07:52.172 "num_base_bdevs_discovered": 2, 00:07:52.172 "num_base_bdevs_operational": 3, 00:07:52.172 "base_bdevs_list": [ 00:07:52.172 { 00:07:52.172 "name": "BaseBdev1", 00:07:52.172 "uuid": "ef44ff50-eaf1-470f-b2f8-36771e96cb81", 00:07:52.172 "is_configured": true, 00:07:52.172 "data_offset": 2048, 00:07:52.172 "data_size": 63488 00:07:52.172 }, 00:07:52.172 { 00:07:52.172 "name": null, 00:07:52.172 "uuid": "8fc05fc8-5e92-462f-9440-6f373faa34c2", 00:07:52.172 "is_configured": false, 00:07:52.172 "data_offset": 0, 00:07:52.172 "data_size": 63488 00:07:52.172 }, 00:07:52.172 { 00:07:52.172 "name": "BaseBdev3", 00:07:52.172 "uuid": "87080c98-2fff-4628-940d-1a2077681b8f", 00:07:52.172 "is_configured": true, 00:07:52.172 "data_offset": 2048, 00:07:52.172 "data_size": 63488 00:07:52.172 } 00:07:52.172 ] 00:07:52.172 }' 00:07:52.172 04:57:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:52.172 04:57:03 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:07:52.740 04:57:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:52.741 04:57:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:07:52.741 04:57:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.741 04:57:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:52.741 04:57:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:52.741 04:57:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:07:52.741 04:57:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:52.741 04:57:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.741 04:57:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:52.741 [2024-12-14 04:57:03.507317] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:52.741 04:57:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:52.741 04:57:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:52.741 04:57:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:52.741 04:57:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:52.741 04:57:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:52.741 04:57:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:52.741 04:57:03 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:52.741 04:57:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:52.741 04:57:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:52.741 04:57:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:52.741 04:57:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:52.741 04:57:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:52.741 04:57:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:52.741 04:57:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.741 04:57:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:52.741 04:57:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:52.741 04:57:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:52.741 "name": "Existed_Raid", 00:07:52.741 "uuid": "9f7be38e-e675-4b78-884a-f86ed21122dc", 00:07:52.741 "strip_size_kb": 64, 00:07:52.741 "state": "configuring", 00:07:52.741 "raid_level": "raid0", 00:07:52.741 "superblock": true, 00:07:52.741 "num_base_bdevs": 3, 00:07:52.741 "num_base_bdevs_discovered": 1, 00:07:52.741 "num_base_bdevs_operational": 3, 00:07:52.741 "base_bdevs_list": [ 00:07:52.741 { 00:07:52.741 "name": null, 00:07:52.741 "uuid": "ef44ff50-eaf1-470f-b2f8-36771e96cb81", 00:07:52.741 "is_configured": false, 00:07:52.741 "data_offset": 0, 00:07:52.741 "data_size": 63488 00:07:52.741 }, 00:07:52.741 { 00:07:52.741 "name": null, 00:07:52.741 "uuid": "8fc05fc8-5e92-462f-9440-6f373faa34c2", 00:07:52.741 "is_configured": false, 00:07:52.741 "data_offset": 0, 00:07:52.741 
"data_size": 63488 00:07:52.741 }, 00:07:52.741 { 00:07:52.741 "name": "BaseBdev3", 00:07:52.741 "uuid": "87080c98-2fff-4628-940d-1a2077681b8f", 00:07:52.741 "is_configured": true, 00:07:52.741 "data_offset": 2048, 00:07:52.741 "data_size": 63488 00:07:52.741 } 00:07:52.741 ] 00:07:52.741 }' 00:07:52.741 04:57:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:52.741 04:57:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:53.339 04:57:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:07:53.339 04:57:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:53.339 04:57:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.339 04:57:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:53.339 04:57:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.339 04:57:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:07:53.339 04:57:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:07:53.339 04:57:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.339 04:57:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:53.339 [2024-12-14 04:57:03.989028] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:53.339 04:57:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.339 04:57:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:53.339 04:57:03 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:53.339 04:57:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:53.339 04:57:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:53.339 04:57:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:53.339 04:57:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:53.339 04:57:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:53.339 04:57:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:53.339 04:57:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:53.339 04:57:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:53.339 04:57:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:53.339 04:57:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.339 04:57:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:53.339 04:57:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:53.339 04:57:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.339 04:57:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:53.339 "name": "Existed_Raid", 00:07:53.339 "uuid": "9f7be38e-e675-4b78-884a-f86ed21122dc", 00:07:53.339 "strip_size_kb": 64, 00:07:53.339 "state": "configuring", 00:07:53.339 "raid_level": "raid0", 00:07:53.339 "superblock": true, 00:07:53.339 "num_base_bdevs": 3, 00:07:53.339 
"num_base_bdevs_discovered": 2, 00:07:53.339 "num_base_bdevs_operational": 3, 00:07:53.339 "base_bdevs_list": [ 00:07:53.339 { 00:07:53.339 "name": null, 00:07:53.339 "uuid": "ef44ff50-eaf1-470f-b2f8-36771e96cb81", 00:07:53.339 "is_configured": false, 00:07:53.339 "data_offset": 0, 00:07:53.339 "data_size": 63488 00:07:53.339 }, 00:07:53.339 { 00:07:53.339 "name": "BaseBdev2", 00:07:53.339 "uuid": "8fc05fc8-5e92-462f-9440-6f373faa34c2", 00:07:53.339 "is_configured": true, 00:07:53.339 "data_offset": 2048, 00:07:53.339 "data_size": 63488 00:07:53.339 }, 00:07:53.339 { 00:07:53.339 "name": "BaseBdev3", 00:07:53.339 "uuid": "87080c98-2fff-4628-940d-1a2077681b8f", 00:07:53.339 "is_configured": true, 00:07:53.339 "data_offset": 2048, 00:07:53.339 "data_size": 63488 00:07:53.339 } 00:07:53.339 ] 00:07:53.339 }' 00:07:53.339 04:57:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:53.339 04:57:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:53.598 04:57:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:53.598 04:57:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:07:53.598 04:57:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.598 04:57:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:53.598 04:57:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.598 04:57:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:07:53.598 04:57:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:53.598 04:57:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.598 04:57:04 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:07:53.598 04:57:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:53.858 04:57:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.858 04:57:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u ef44ff50-eaf1-470f-b2f8-36771e96cb81 00:07:53.858 04:57:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.858 04:57:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:53.858 [2024-12-14 04:57:04.523059] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:07:53.858 [2024-12-14 04:57:04.523330] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:07:53.858 [2024-12-14 04:57:04.523385] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:07:53.858 [2024-12-14 04:57:04.523656] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:07:53.858 NewBaseBdev 00:07:53.858 [2024-12-14 04:57:04.523811] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:07:53.858 [2024-12-14 04:57:04.523822] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:07:53.858 [2024-12-14 04:57:04.523929] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:53.858 04:57:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.858 04:57:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:07:53.858 04:57:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 
00:07:53.858 04:57:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:53.858 04:57:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:07:53.858 04:57:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:53.858 04:57:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:53.858 04:57:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:53.858 04:57:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.858 04:57:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:53.858 04:57:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.858 04:57:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:07:53.858 04:57:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.858 04:57:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:53.858 [ 00:07:53.858 { 00:07:53.858 "name": "NewBaseBdev", 00:07:53.858 "aliases": [ 00:07:53.858 "ef44ff50-eaf1-470f-b2f8-36771e96cb81" 00:07:53.858 ], 00:07:53.858 "product_name": "Malloc disk", 00:07:53.858 "block_size": 512, 00:07:53.858 "num_blocks": 65536, 00:07:53.858 "uuid": "ef44ff50-eaf1-470f-b2f8-36771e96cb81", 00:07:53.858 "assigned_rate_limits": { 00:07:53.858 "rw_ios_per_sec": 0, 00:07:53.858 "rw_mbytes_per_sec": 0, 00:07:53.858 "r_mbytes_per_sec": 0, 00:07:53.858 "w_mbytes_per_sec": 0 00:07:53.858 }, 00:07:53.858 "claimed": true, 00:07:53.858 "claim_type": "exclusive_write", 00:07:53.858 "zoned": false, 00:07:53.858 "supported_io_types": { 00:07:53.858 "read": true, 00:07:53.858 "write": true, 
00:07:53.858 "unmap": true, 00:07:53.858 "flush": true, 00:07:53.858 "reset": true, 00:07:53.858 "nvme_admin": false, 00:07:53.858 "nvme_io": false, 00:07:53.858 "nvme_io_md": false, 00:07:53.858 "write_zeroes": true, 00:07:53.858 "zcopy": true, 00:07:53.858 "get_zone_info": false, 00:07:53.858 "zone_management": false, 00:07:53.858 "zone_append": false, 00:07:53.858 "compare": false, 00:07:53.858 "compare_and_write": false, 00:07:53.858 "abort": true, 00:07:53.858 "seek_hole": false, 00:07:53.858 "seek_data": false, 00:07:53.858 "copy": true, 00:07:53.858 "nvme_iov_md": false 00:07:53.858 }, 00:07:53.858 "memory_domains": [ 00:07:53.858 { 00:07:53.858 "dma_device_id": "system", 00:07:53.858 "dma_device_type": 1 00:07:53.858 }, 00:07:53.858 { 00:07:53.858 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:53.858 "dma_device_type": 2 00:07:53.858 } 00:07:53.858 ], 00:07:53.858 "driver_specific": {} 00:07:53.858 } 00:07:53.858 ] 00:07:53.858 04:57:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.858 04:57:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:07:53.858 04:57:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:07:53.859 04:57:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:53.859 04:57:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:53.859 04:57:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:53.859 04:57:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:53.859 04:57:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:53.859 04:57:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:07:53.859 04:57:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:53.859 04:57:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:53.859 04:57:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:53.859 04:57:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:53.859 04:57:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:53.859 04:57:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.859 04:57:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:53.859 04:57:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.859 04:57:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:53.859 "name": "Existed_Raid", 00:07:53.859 "uuid": "9f7be38e-e675-4b78-884a-f86ed21122dc", 00:07:53.859 "strip_size_kb": 64, 00:07:53.859 "state": "online", 00:07:53.859 "raid_level": "raid0", 00:07:53.859 "superblock": true, 00:07:53.859 "num_base_bdevs": 3, 00:07:53.859 "num_base_bdevs_discovered": 3, 00:07:53.859 "num_base_bdevs_operational": 3, 00:07:53.859 "base_bdevs_list": [ 00:07:53.859 { 00:07:53.859 "name": "NewBaseBdev", 00:07:53.859 "uuid": "ef44ff50-eaf1-470f-b2f8-36771e96cb81", 00:07:53.859 "is_configured": true, 00:07:53.859 "data_offset": 2048, 00:07:53.859 "data_size": 63488 00:07:53.859 }, 00:07:53.859 { 00:07:53.859 "name": "BaseBdev2", 00:07:53.859 "uuid": "8fc05fc8-5e92-462f-9440-6f373faa34c2", 00:07:53.859 "is_configured": true, 00:07:53.859 "data_offset": 2048, 00:07:53.859 "data_size": 63488 00:07:53.859 }, 00:07:53.859 { 00:07:53.859 "name": "BaseBdev3", 00:07:53.859 "uuid": 
"87080c98-2fff-4628-940d-1a2077681b8f", 00:07:53.859 "is_configured": true, 00:07:53.859 "data_offset": 2048, 00:07:53.859 "data_size": 63488 00:07:53.859 } 00:07:53.859 ] 00:07:53.859 }' 00:07:53.859 04:57:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:53.859 04:57:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:54.118 04:57:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:07:54.118 04:57:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:54.118 04:57:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:54.118 04:57:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:54.118 04:57:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:07:54.118 04:57:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:54.118 04:57:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:54.118 04:57:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.118 04:57:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:54.118 04:57:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:54.118 [2024-12-14 04:57:04.974576] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:54.118 04:57:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.389 04:57:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:54.389 "name": "Existed_Raid", 00:07:54.389 "aliases": [ 00:07:54.389 "9f7be38e-e675-4b78-884a-f86ed21122dc" 
00:07:54.389 ], 00:07:54.389 "product_name": "Raid Volume", 00:07:54.389 "block_size": 512, 00:07:54.389 "num_blocks": 190464, 00:07:54.389 "uuid": "9f7be38e-e675-4b78-884a-f86ed21122dc", 00:07:54.389 "assigned_rate_limits": { 00:07:54.389 "rw_ios_per_sec": 0, 00:07:54.389 "rw_mbytes_per_sec": 0, 00:07:54.389 "r_mbytes_per_sec": 0, 00:07:54.389 "w_mbytes_per_sec": 0 00:07:54.389 }, 00:07:54.389 "claimed": false, 00:07:54.389 "zoned": false, 00:07:54.389 "supported_io_types": { 00:07:54.389 "read": true, 00:07:54.389 "write": true, 00:07:54.389 "unmap": true, 00:07:54.389 "flush": true, 00:07:54.389 "reset": true, 00:07:54.389 "nvme_admin": false, 00:07:54.389 "nvme_io": false, 00:07:54.389 "nvme_io_md": false, 00:07:54.389 "write_zeroes": true, 00:07:54.389 "zcopy": false, 00:07:54.389 "get_zone_info": false, 00:07:54.389 "zone_management": false, 00:07:54.389 "zone_append": false, 00:07:54.389 "compare": false, 00:07:54.389 "compare_and_write": false, 00:07:54.389 "abort": false, 00:07:54.389 "seek_hole": false, 00:07:54.389 "seek_data": false, 00:07:54.389 "copy": false, 00:07:54.389 "nvme_iov_md": false 00:07:54.389 }, 00:07:54.389 "memory_domains": [ 00:07:54.389 { 00:07:54.389 "dma_device_id": "system", 00:07:54.389 "dma_device_type": 1 00:07:54.389 }, 00:07:54.389 { 00:07:54.389 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:54.389 "dma_device_type": 2 00:07:54.389 }, 00:07:54.389 { 00:07:54.389 "dma_device_id": "system", 00:07:54.389 "dma_device_type": 1 00:07:54.389 }, 00:07:54.389 { 00:07:54.389 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:54.389 "dma_device_type": 2 00:07:54.389 }, 00:07:54.389 { 00:07:54.389 "dma_device_id": "system", 00:07:54.389 "dma_device_type": 1 00:07:54.389 }, 00:07:54.389 { 00:07:54.389 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:54.389 "dma_device_type": 2 00:07:54.389 } 00:07:54.389 ], 00:07:54.389 "driver_specific": { 00:07:54.389 "raid": { 00:07:54.389 "uuid": "9f7be38e-e675-4b78-884a-f86ed21122dc", 00:07:54.389 
"strip_size_kb": 64, 00:07:54.389 "state": "online", 00:07:54.389 "raid_level": "raid0", 00:07:54.389 "superblock": true, 00:07:54.389 "num_base_bdevs": 3, 00:07:54.389 "num_base_bdevs_discovered": 3, 00:07:54.389 "num_base_bdevs_operational": 3, 00:07:54.389 "base_bdevs_list": [ 00:07:54.389 { 00:07:54.389 "name": "NewBaseBdev", 00:07:54.389 "uuid": "ef44ff50-eaf1-470f-b2f8-36771e96cb81", 00:07:54.389 "is_configured": true, 00:07:54.389 "data_offset": 2048, 00:07:54.389 "data_size": 63488 00:07:54.389 }, 00:07:54.389 { 00:07:54.389 "name": "BaseBdev2", 00:07:54.389 "uuid": "8fc05fc8-5e92-462f-9440-6f373faa34c2", 00:07:54.389 "is_configured": true, 00:07:54.389 "data_offset": 2048, 00:07:54.389 "data_size": 63488 00:07:54.389 }, 00:07:54.389 { 00:07:54.389 "name": "BaseBdev3", 00:07:54.389 "uuid": "87080c98-2fff-4628-940d-1a2077681b8f", 00:07:54.389 "is_configured": true, 00:07:54.389 "data_offset": 2048, 00:07:54.389 "data_size": 63488 00:07:54.389 } 00:07:54.389 ] 00:07:54.389 } 00:07:54.389 } 00:07:54.389 }' 00:07:54.389 04:57:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:54.389 04:57:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:07:54.389 BaseBdev2 00:07:54.389 BaseBdev3' 00:07:54.389 04:57:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:54.389 04:57:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:54.389 04:57:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:54.389 04:57:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:07:54.389 04:57:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:54.389 04:57:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.389 04:57:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:54.389 04:57:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.389 04:57:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:54.389 04:57:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:54.389 04:57:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:54.389 04:57:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:54.389 04:57:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.389 04:57:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:54.389 04:57:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:54.389 04:57:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.389 04:57:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:54.389 04:57:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:54.389 04:57:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:54.389 04:57:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:07:54.389 04:57:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.389 04:57:05 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:54.389 04:57:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:54.389 04:57:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.389 04:57:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:54.389 04:57:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:54.389 04:57:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:54.389 04:57:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.389 04:57:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:54.389 [2024-12-14 04:57:05.257816] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:54.389 [2024-12-14 04:57:05.257841] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:54.390 [2024-12-14 04:57:05.257915] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:54.390 [2024-12-14 04:57:05.257966] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:54.390 [2024-12-14 04:57:05.257977] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline 00:07:54.390 04:57:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.390 04:57:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 75663 00:07:54.390 04:57:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 75663 ']' 00:07:54.390 04:57:05 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 75663 00:07:54.390 04:57:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:07:54.650 04:57:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:54.650 04:57:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75663 00:07:54.650 04:57:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:54.650 04:57:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:54.650 04:57:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75663' 00:07:54.650 killing process with pid 75663 00:07:54.650 04:57:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 75663 00:07:54.650 [2024-12-14 04:57:05.306137] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:54.650 04:57:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 75663 00:07:54.650 [2024-12-14 04:57:05.337878] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:54.909 04:57:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:07:54.909 00:07:54.909 real 0m8.597s 00:07:54.909 user 0m14.691s 00:07:54.909 sys 0m1.665s 00:07:54.909 04:57:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:54.909 04:57:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:54.909 ************************************ 00:07:54.909 END TEST raid_state_function_test_sb 00:07:54.909 ************************************ 00:07:54.909 04:57:05 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:07:54.909 04:57:05 
bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:07:54.909 04:57:05 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:54.909 04:57:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:54.909 ************************************ 00:07:54.909 START TEST raid_superblock_test 00:07:54.909 ************************************ 00:07:54.909 04:57:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid0 3 00:07:54.909 04:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:07:54.909 04:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:07:54.909 04:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:07:54.909 04:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:07:54.909 04:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:07:54.909 04:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:07:54.909 04:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:07:54.909 04:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:07:54.909 04:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:07:54.909 04:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:07:54.909 04:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:07:54.909 04:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:07:54.909 04:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:07:54.909 04:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:07:54.909 04:57:05 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:07:54.909 04:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:07:54.909 04:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=76267 00:07:54.909 04:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:07:54.909 04:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 76267 00:07:54.909 04:57:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 76267 ']' 00:07:54.909 04:57:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:54.909 04:57:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:54.909 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:54.909 04:57:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:54.909 04:57:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:54.909 04:57:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.909 [2024-12-14 04:57:05.742760] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:07:54.909 [2024-12-14 04:57:05.742903] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76267 ]
00:07:55.169 [2024-12-14 04:57:05.904033] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:55.169 [2024-12-14 04:57:05.950286] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:07:55.169 [2024-12-14 04:57:05.993156] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:07:55.169 [2024-12-14 04:57:05.993190] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:07:55.738 04:57:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:07:55.738 04:57:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0
00:07:55.738 04:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 ))
00:07:55.738 04:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:07:55.738 04:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1
00:07:55.738 04:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1
00:07:55.738 04:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001
00:07:55.738 04:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:07:55.738 04:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:07:55.738 04:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:07:55.738 04:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1
00:07:55.738 04:57:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:55.738 04:57:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:55.738 malloc1
00:07:55.738 04:57:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:55.738 04:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:07:55.738 04:57:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:55.738 04:57:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:55.738 [2024-12-14 04:57:06.592318] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:07:55.738 [2024-12-14 04:57:06.592464] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:07:55.738 [2024-12-14 04:57:06.592506] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:07:55.738 [2024-12-14 04:57:06.592542] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:07:55.738 [2024-12-14 04:57:06.594663] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:07:55.738 [2024-12-14 04:57:06.594752] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:07:55.738 pt1
00:07:55.738 04:57:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:55.738 04:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:07:55.738 04:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:07:55.738 04:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2
00:07:55.738 04:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2
00:07:55.738 04:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002
00:07:55.738 04:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:07:55.738 04:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:07:55.738 04:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:07:55.738 04:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2
00:07:55.738 04:57:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:55.738 04:57:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:55.999 malloc2
00:07:55.999 04:57:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:55.999 04:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:07:55.999 04:57:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:55.999 04:57:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:55.999 [2024-12-14 04:57:06.641279] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:07:55.999 [2024-12-14 04:57:06.641475] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:07:55.999 [2024-12-14 04:57:06.641560] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80
00:07:55.999 [2024-12-14 04:57:06.641643] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:07:55.999 [2024-12-14 04:57:06.646485] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:07:55.999 [2024-12-14 04:57:06.646636] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:07:55.999
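The xtrace entries above come from the base-bdev setup loop in bdev_raid.sh (lines 416-426): for each index i the test derives a malloc bdev name, a passthru bdev name, and a deterministic UUID, then issues `bdev_malloc_create` and `bdev_passthru_create` RPCs. As an illustration only (not part of the log), the naming scheme can be reproduced offline; the `rpc_cmd` calls are left as comments because they need a running SPDK target:

```shell
# Offline sketch of the bdev_raid.sh@416-426 loop traced above.
# Builds the same parallel arrays of malloc/passthru names and UUIDs.
num_base_bdevs=3
base_bdevs_malloc=()
base_bdevs_pt=()
base_bdevs_pt_uuid=()
for ((i = 1; i <= num_base_bdevs; i++)); do
    bdev_malloc="malloc$i"
    bdev_pt="pt$i"
    bdev_pt_uuid="00000000-0000-0000-0000-00000000000$i"
    base_bdevs_malloc+=("$bdev_malloc")
    base_bdevs_pt+=("$bdev_pt")
    base_bdevs_pt_uuid+=("$bdev_pt_uuid")
    # In the real test these are RPCs against a live SPDK target:
    # rpc_cmd bdev_malloc_create 32 512 -b "$bdev_malloc"
    # rpc_cmd bdev_passthru_create -b "$bdev_malloc" -p "$bdev_pt" -u "$bdev_pt_uuid"
done
echo "${base_bdevs_pt[*]}"
```

The `pt1 pt2 pt3` names produced here are the base bdevs the log next assembles into `raid_bdev1` with `bdev_raid_create -z 64 -r raid0 -s`.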
pt2
00:07:55.999 04:57:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:55.999 04:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:07:55.999 04:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:07:55.999 04:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3
00:07:55.999 04:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3
00:07:55.999 04:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003
00:07:55.999 04:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:07:55.999 04:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:07:55.999 04:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:07:55.999 04:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3
00:07:55.999 04:57:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:55.999 04:57:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:55.999 malloc3
00:07:55.999 04:57:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:55.999 04:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:07:55.999 04:57:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:55.999 04:57:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:55.999 [2024-12-14 04:57:06.672254] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:07:55.999 [2024-12-14 04:57:06.672342] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:07:55.999 [2024-12-14 04:57:06.672377] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80
00:07:55.999 [2024-12-14 04:57:06.672406] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:07:55.999 [2024-12-14 04:57:06.674459] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:07:55.999 [2024-12-14 04:57:06.674528] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:07:55.999 pt3
00:07:55.999 04:57:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:55.999 04:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:07:55.999 04:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:07:55.999 04:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s
00:07:55.999 04:57:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:55.999 04:57:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:55.999 [2024-12-14 04:57:06.684283] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:07:55.999 [2024-12-14 04:57:06.686084] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:07:55.999 [2024-12-14 04:57:06.686210] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:07:55.999 [2024-12-14 04:57:06.686373] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280
00:07:55.999 [2024-12-14 04:57:06.686418] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512
00:07:55.999 [2024-12-14 04:57:06.686667] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70
00:07:55.999 [2024-12-14 04:57:06.686832] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280
00:07:55.999 [2024-12-14 04:57:06.686877] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280
00:07:55.999 [2024-12-14 04:57:06.687041] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:07:55.999 04:57:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:55.999 04:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3
00:07:55.999 04:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:07:55.999 04:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:07:55.999 04:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:07:55.999 04:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:07:55.999 04:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:07:55.999 04:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:07:55.999 04:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:07:55.999 04:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:07:55.999 04:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:07:55.999 04:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:55.999 04:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:07:55.999 04:57:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:55.999 04:57:06
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:55.999 04:57:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:55.999 04:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:07:55.999 "name": "raid_bdev1",
00:07:55.999 "uuid": "56b7379f-2dff-4391-9d4e-f4f3379d911b",
00:07:55.999 "strip_size_kb": 64,
00:07:55.999 "state": "online",
00:07:55.999 "raid_level": "raid0",
00:07:55.999 "superblock": true,
00:07:55.999 "num_base_bdevs": 3,
00:07:55.999 "num_base_bdevs_discovered": 3,
00:07:55.999 "num_base_bdevs_operational": 3,
00:07:55.999 "base_bdevs_list": [
00:07:55.999 {
00:07:55.999 "name": "pt1",
00:07:55.999 "uuid": "00000000-0000-0000-0000-000000000001",
00:07:55.999 "is_configured": true,
00:07:55.999 "data_offset": 2048,
00:07:55.999 "data_size": 63488
00:07:55.999 },
00:07:55.999 {
00:07:55.999 "name": "pt2",
00:07:55.999 "uuid": "00000000-0000-0000-0000-000000000002",
00:07:55.999 "is_configured": true,
00:07:55.999 "data_offset": 2048,
00:07:55.999 "data_size": 63488
00:07:55.999 },
00:07:55.999 {
00:07:55.999 "name": "pt3",
00:07:55.999 "uuid": "00000000-0000-0000-0000-000000000003",
00:07:55.999 "is_configured": true,
00:07:55.999 "data_offset": 2048,
00:07:55.999 "data_size": 63488
00:07:55.999 }
00:07:55.999 ]
00:07:55.999 }'
00:07:55.999 04:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:07:55.999 04:57:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:56.258 04:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1
00:07:56.258 04:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:07:56.258 04:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:07:56.258 04:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:07:56.258 04:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name
00:07:56.258 04:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:07:56.258 04:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:07:56.258 04:57:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:56.258 04:57:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:56.258 04:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:07:56.258 [2024-12-14 04:57:07.119777] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:07:56.518 04:57:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:56.518 04:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:07:56.518 "name": "raid_bdev1",
00:07:56.518 "aliases": [
00:07:56.518 "56b7379f-2dff-4391-9d4e-f4f3379d911b"
00:07:56.518 ],
00:07:56.518 "product_name": "Raid Volume",
00:07:56.518 "block_size": 512,
00:07:56.518 "num_blocks": 190464,
00:07:56.518 "uuid": "56b7379f-2dff-4391-9d4e-f4f3379d911b",
00:07:56.518 "assigned_rate_limits": {
00:07:56.518 "rw_ios_per_sec": 0,
00:07:56.518 "rw_mbytes_per_sec": 0,
00:07:56.518 "r_mbytes_per_sec": 0,
00:07:56.518 "w_mbytes_per_sec": 0
00:07:56.518 },
00:07:56.518 "claimed": false,
00:07:56.518 "zoned": false,
00:07:56.518 "supported_io_types": {
00:07:56.518 "read": true,
00:07:56.518 "write": true,
00:07:56.518 "unmap": true,
00:07:56.518 "flush": true,
00:07:56.518 "reset": true,
00:07:56.518 "nvme_admin": false,
00:07:56.518 "nvme_io": false,
00:07:56.518 "nvme_io_md": false,
00:07:56.518 "write_zeroes": true,
00:07:56.518 "zcopy": false,
00:07:56.518 "get_zone_info": false,
00:07:56.518 "zone_management": false,
00:07:56.518 "zone_append": false,
00:07:56.518 "compare": false,
00:07:56.518 "compare_and_write": false,
00:07:56.518 "abort": false,
00:07:56.518 "seek_hole": false,
00:07:56.518 "seek_data": false,
00:07:56.518 "copy": false,
00:07:56.518 "nvme_iov_md": false
00:07:56.518 },
00:07:56.518 "memory_domains": [
00:07:56.518 {
00:07:56.518 "dma_device_id": "system",
00:07:56.518 "dma_device_type": 1
00:07:56.518 },
00:07:56.518 {
00:07:56.518 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:07:56.518 "dma_device_type": 2
00:07:56.518 },
00:07:56.518 {
00:07:56.518 "dma_device_id": "system",
00:07:56.518 "dma_device_type": 1
00:07:56.518 },
00:07:56.518 {
00:07:56.518 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:07:56.518 "dma_device_type": 2
00:07:56.518 },
00:07:56.518 {
00:07:56.518 "dma_device_id": "system",
00:07:56.518 "dma_device_type": 1
00:07:56.518 },
00:07:56.518 {
00:07:56.518 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:07:56.518 "dma_device_type": 2
00:07:56.518 }
00:07:56.518 ],
00:07:56.518 "driver_specific": {
00:07:56.518 "raid": {
00:07:56.518 "uuid": "56b7379f-2dff-4391-9d4e-f4f3379d911b",
00:07:56.518 "strip_size_kb": 64,
00:07:56.518 "state": "online",
00:07:56.518 "raid_level": "raid0",
00:07:56.518 "superblock": true,
00:07:56.518 "num_base_bdevs": 3,
00:07:56.518 "num_base_bdevs_discovered": 3,
00:07:56.518 "num_base_bdevs_operational": 3,
00:07:56.518 "base_bdevs_list": [
00:07:56.518 {
00:07:56.518 "name": "pt1",
00:07:56.518 "uuid": "00000000-0000-0000-0000-000000000001",
00:07:56.518 "is_configured": true,
00:07:56.518 "data_offset": 2048,
00:07:56.518 "data_size": 63488
00:07:56.518 },
00:07:56.518 {
00:07:56.518 "name": "pt2",
00:07:56.518 "uuid": "00000000-0000-0000-0000-000000000002",
00:07:56.518 "is_configured": true,
00:07:56.518 "data_offset": 2048,
00:07:56.518 "data_size": 63488
00:07:56.518 },
00:07:56.518 {
00:07:56.518 "name": "pt3",
00:07:56.518 "uuid": "00000000-0000-0000-0000-000000000003",
00:07:56.518 "is_configured": true,
00:07:56.518 "data_offset": 2048,
00:07:56.518 "data_size": 63488
00:07:56.518 }
00:07:56.518 ]
00:07:56.518 }
00:07:56.518 }
00:07:56.518 }'
00:07:56.518 04:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:07:56.518 04:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:07:56.518 pt2
00:07:56.518 pt3'
00:07:56.518 04:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:07:56.518 04:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:07:56.518 04:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:07:56.518 04:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:07:56.518 04:57:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:56.518 04:57:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:56.518 04:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:07:56.518 04:57:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:56.518 04:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:07:56.518 04:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:07:56.518 04:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:07:56.518 04:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:07:56.518 04:57:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:56.518 04:57:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:56.518
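The `verify_raid_bdev_state` and `verify_raid_bdev_properties` traces above pull fields like `state` and `raid_level` out of the `bdev_raid_get_bdevs` JSON with jq and compare them to the expected values. A minimal offline stand-in for that check, run against a saved copy of the JSON from the log (field extraction via grep/cut is an illustration; the real helper uses jq against a live target):

```shell
# Sketch of the state checks verify_raid_bdev_state performs, using
# values copied from the raid_bdev_info JSON in the log above.
raid_bdev_info='{ "name": "raid_bdev1", "state": "online", "raid_level": "raid0", "strip_size_kb": 64, "num_base_bdevs_discovered": 3 }'
# Extract "state" and "raid_level" the way jq's .state / .raid_level would.
state=$(echo "$raid_bdev_info" | grep -o '"state": "[^"]*"' | cut -d'"' -f4)
level=$(echo "$raid_bdev_info" | grep -o '"raid_level": "[^"]*"' | cut -d'"' -f4)
[ "$state" = online ] && [ "$level" = raid0 ] && echo "raid_bdev1 state verified"
```

The same pattern explains the `cmp_raid_bdev`/`cmp_base_bdev` comparisons in the trace: block_size, md_size, md_interleave, and dif_type are joined into one string per bdev and compared against the raid volume's values.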
04:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:07:56.518 04:57:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:56.518 04:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:07:56.518 04:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:07:56.518 04:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:07:56.518 04:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3
00:07:56.518 04:57:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:56.518 04:57:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:56.519 04:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:07:56.519 04:57:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:56.519 04:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:07:56.519 04:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:07:56.519 04:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:07:56.519 04:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid'
00:07:56.519 04:57:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:56.519 04:57:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:56.519 [2024-12-14 04:57:07.375544] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:07:56.779 04:57:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:56.779 04:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=56b7379f-2dff-4391-9d4e-f4f3379d911b
00:07:56.779 04:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 56b7379f-2dff-4391-9d4e-f4f3379d911b ']'
00:07:56.779 04:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:07:56.779 04:57:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:56.779 04:57:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:56.779 [2024-12-14 04:57:07.423300] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:07:56.779 [2024-12-14 04:57:07.423377] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:07:56.779 [2024-12-14 04:57:07.423499] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:07:56.779 [2024-12-14 04:57:07.423603] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:07:56.779 [2024-12-14 04:57:07.423666] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline
00:07:56.779 04:57:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:56.779 04:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:56.779 04:57:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:56.779 04:57:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:56.779 04:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]'
00:07:56.779 04:57:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:56.779 04:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev=
00:07:56.779 04:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']'
00:07:56.779 04:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:07:56.779 04:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1
00:07:56.779 04:57:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:56.779 04:57:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:56.779 04:57:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:56.779 04:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:07:56.779 04:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2
00:07:56.779 04:57:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:56.779 04:57:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:56.779 04:57:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:56.779 04:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:07:56.779 04:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3
00:07:56.779 04:57:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:56.779 04:57:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:56.779 04:57:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:56.779 04:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any'
00:07:56.779 04:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs
00:07:56.779 04:57:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:56.779 04:57:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:56.779 04:57:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:56.779 04:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']'
00:07:56.779 04:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1
00:07:56.779 04:57:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0
00:07:56.779 04:57:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1
00:07:56.779 04:57:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd
00:07:56.779 04:57:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:07:56.779 04:57:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd
00:07:56.779 04:57:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:07:56.779 04:57:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1
00:07:56.779 04:57:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:56.779 04:57:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:56.779 [2024-12-14 04:57:07.567314] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
00:07:56.779 [2024-12-14 04:57:07.569207] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:07:56.779 [2024-12-14 04:57:07.569255] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed
00:07:56.779 [2024-12-14 04:57:07.569306] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1
00:07:56.779 [2024-12-14 04:57:07.569347] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2
00:07:56.779 [2024-12-14 04:57:07.569365] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3
00:07:56.779 [2024-12-14 04:57:07.569377] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:07:56.779 [2024-12-14 04:57:07.569387] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state configuring
00:07:56.779 request:
00:07:56.779 {
00:07:56.779 "name": "raid_bdev1",
00:07:56.779 "raid_level": "raid0",
00:07:56.779 "base_bdevs": [
00:07:56.779 "malloc1",
00:07:56.779 "malloc2",
00:07:56.779 "malloc3"
00:07:56.779 ],
00:07:56.779 "strip_size_kb": 64,
00:07:56.779 "superblock": false,
00:07:56.779 "method": "bdev_raid_create",
00:07:56.779 "req_id": 1
00:07:56.779 }
00:07:56.779 Got JSON-RPC error response
00:07:56.779 response:
00:07:56.779 {
00:07:56.779 "code": -17,
00:07:56.779 "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:07:56.779 }
00:07:56.779 04:57:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:07:56.779 04:57:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1
00:07:56.779 04:57:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:07:56.779 04:57:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:07:56.779 04:57:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:07:56.779 04:57:07 bdev_raid.raid_superblock_test
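The `NOT` wrapper at bdev_raid.sh@457 asserts that re-creating `raid_bdev1` from base bdevs that already carry its superblock fails, which the JSON-RPC error above confirms (code -17, "File exists"). As an offline illustration only, the expected-failure bookkeeping (`es=1` in the trace) can be sketched against the response text copied from the log:

```shell
# Sketch of the expected-failure check: the duplicate bdev_raid_create
# must be rejected. The response string is copied from the log above.
response='{ "code": -17, "message": "Failed to create RAID bdev raid_bdev1: File exists" }'
case "$response" in
    *'File exists'*) es=1 ;;  # mirrors es=1 set by the NOT wrapper on failure
    *) es=0 ;;
esac
echo "es=$es"
```

In the real helper, `es` is the RPC's exit status; `NOT` succeeds only when that status is non-zero, which is why the trace ends with `(( !es == 0 ))`.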
-- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:56.779 04:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]'
00:07:56.779 04:57:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:56.779 04:57:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:56.779 04:57:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:56.779 04:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev=
00:07:56.779 04:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']'
00:07:56.779 04:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:07:56.779 04:57:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:56.779 04:57:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:56.779 [2024-12-14 04:57:07.635294] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:07:56.779 [2024-12-14 04:57:07.635381] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:07:56.780 [2024-12-14 04:57:07.635414] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680
00:07:56.780 [2024-12-14 04:57:07.635442] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:07:56.780 [2024-12-14 04:57:07.637478] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:07:56.780 [2024-12-14 04:57:07.637549] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:07:56.780 [2024-12-14 04:57:07.637633] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1
00:07:56.780 [2024-12-14 04:57:07.637702] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:07:56.780 pt1
00:07:56.780 04:57:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:56.780 04:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3
00:07:56.780 04:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:07:56.780 04:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:07:56.780 04:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:07:56.780 04:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:07:56.780 04:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:07:56.780 04:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:07:56.780 04:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:07:56.780 04:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:07:56.780 04:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:07:56.780 04:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:07:56.780 04:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:56.780 04:57:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:56.780 04:57:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:57.039 04:57:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:57.039 04:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:07:57.039 "name": "raid_bdev1",
00:07:57.039 "uuid": "56b7379f-2dff-4391-9d4e-f4f3379d911b",
00:07:57.039 "strip_size_kb": 64,
00:07:57.039 "state": "configuring",
00:07:57.039 "raid_level": "raid0",
00:07:57.039 "superblock": true,
00:07:57.039 "num_base_bdevs": 3,
00:07:57.039 "num_base_bdevs_discovered": 1,
00:07:57.039 "num_base_bdevs_operational": 3,
00:07:57.039 "base_bdevs_list": [
00:07:57.039 {
00:07:57.039 "name": "pt1",
00:07:57.039 "uuid": "00000000-0000-0000-0000-000000000001",
00:07:57.039 "is_configured": true,
00:07:57.039 "data_offset": 2048,
00:07:57.039 "data_size": 63488
00:07:57.039 },
00:07:57.039 {
00:07:57.039 "name": null,
00:07:57.039 "uuid": "00000000-0000-0000-0000-000000000002",
00:07:57.039 "is_configured": false,
00:07:57.039 "data_offset": 2048,
00:07:57.039 "data_size": 63488
00:07:57.039 },
00:07:57.039 {
00:07:57.039 "name": null,
00:07:57.039 "uuid": "00000000-0000-0000-0000-000000000003",
00:07:57.039 "is_configured": false,
00:07:57.039 "data_offset": 2048,
00:07:57.039 "data_size": 63488
00:07:57.039 }
00:07:57.039 ]
00:07:57.039 }'
00:07:57.039 04:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:07:57.039 04:57:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:57.298 04:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']'
00:07:57.298 04:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:07:57.298 04:57:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:57.298 04:57:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:57.298 [2024-12-14 04:57:08.079295] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:07:57.298 [2024-12-14 04:57:08.079416] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:07:57.298 [2024-12-14 04:57:08.079440] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80
00:07:57.298 [2024-12-14 04:57:08.079453] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:07:57.298 [2024-12-14 04:57:08.079834] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:07:57.298 [2024-12-14 04:57:08.079854] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:07:57.298 [2024-12-14 04:57:08.079919] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:07:57.298 [2024-12-14 04:57:08.079942] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:07:57.298 pt2
00:07:57.298 04:57:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:57.298 04:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2
00:07:57.298 04:57:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:57.298 04:57:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:57.298 [2024-12-14 04:57:08.091336] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2
00:07:57.298 04:57:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:57.298 04:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3
00:07:57.298 04:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:07:57.298 04:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:07:57.298 04:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:07:57.298 04:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:07:57.298 04:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:07:57.298 04:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:07:57.299 04:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:07:57.299 04:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:07:57.299 04:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:07:57.299 04:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:57.299 04:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:07:57.299 04:57:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:57.299 04:57:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:57.299 04:57:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:57.299 04:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:07:57.299 "name": "raid_bdev1",
00:07:57.299 "uuid": "56b7379f-2dff-4391-9d4e-f4f3379d911b",
00:07:57.299 "strip_size_kb": 64,
00:07:57.299 "state": "configuring",
00:07:57.299 "raid_level": "raid0",
00:07:57.299 "superblock": true,
00:07:57.299 "num_base_bdevs": 3,
00:07:57.299 "num_base_bdevs_discovered": 1,
00:07:57.299 "num_base_bdevs_operational": 3,
00:07:57.299 "base_bdevs_list": [
00:07:57.299 {
00:07:57.299 "name": "pt1",
00:07:57.299 "uuid": "00000000-0000-0000-0000-000000000001",
00:07:57.299 "is_configured": true,
00:07:57.299 "data_offset": 2048,
00:07:57.299 "data_size": 63488
00:07:57.299 },
00:07:57.299 {
00:07:57.299 "name": null,
00:07:57.299 "uuid": "00000000-0000-0000-0000-000000000002",
00:07:57.299 "is_configured": false,
00:07:57.299 "data_offset": 0,
00:07:57.299 "data_size": 63488
00:07:57.299 },
00:07:57.299 {
00:07:57.299 "name": null,
00:07:57.299 "uuid": "00000000-0000-0000-0000-000000000003",
00:07:57.299 "is_configured": false,
00:07:57.299 "data_offset": 2048,
00:07:57.299 "data_size": 63488
00:07:57.299 }
00:07:57.299 ]
00:07:57.299 }'
00:07:57.299 04:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:07:57.299 04:57:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:57.867 04:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 ))
00:07:57.867 04:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:07:57.867 04:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:07:57.867 04:57:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:57.867 04:57:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:57.867 [2024-12-14 04:57:08.491319] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:07:57.867 [2024-12-14 04:57:08.491383] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:07:57.868 [2024-12-14 04:57:08.491407] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80
00:07:57.868 [2024-12-14 04:57:08.491418] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:07:57.868 [2024-12-14 04:57:08.491807] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:07:57.868 [2024-12-14 04:57:08.491831] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:07:57.868 [2024-12-14 04:57:08.491905] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:07:57.868 [2024-12-14 04:57:08.491931] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:07:57.868 pt2
00:07:57.868 04:57:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:57.868 04:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:07:57.868 04:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:57.868 04:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:07:57.868 04:57:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.868 04:57:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.868 [2024-12-14 04:57:08.503292] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:07:57.868 [2024-12-14 04:57:08.503337] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:57.868 [2024-12-14 04:57:08.503355] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:07:57.868 [2024-12-14 04:57:08.503362] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:57.868 [2024-12-14 04:57:08.503705] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:57.868 [2024-12-14 04:57:08.503727] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:07:57.868 [2024-12-14 04:57:08.503786] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:07:57.868 [2024-12-14 04:57:08.503810] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:07:57.868 [2024-12-14 04:57:08.503900] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:07:57.868 [2024-12-14 04:57:08.503912] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:07:57.868 [2024-12-14 04:57:08.504134] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:57.868 [2024-12-14 04:57:08.504255] 
bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:07:57.868 [2024-12-14 04:57:08.504270] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:07:57.868 [2024-12-14 04:57:08.504365] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:57.868 pt3 00:07:57.868 04:57:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.868 04:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:07:57.868 04:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:57.868 04:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:07:57.868 04:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:57.868 04:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:57.868 04:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:57.868 04:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:57.868 04:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:57.868 04:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:57.868 04:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:57.868 04:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:57.868 04:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:57.868 04:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:57.868 04:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:07:57.868 04:57:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.868 04:57:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.868 04:57:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.868 04:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:57.868 "name": "raid_bdev1", 00:07:57.868 "uuid": "56b7379f-2dff-4391-9d4e-f4f3379d911b", 00:07:57.868 "strip_size_kb": 64, 00:07:57.868 "state": "online", 00:07:57.868 "raid_level": "raid0", 00:07:57.868 "superblock": true, 00:07:57.868 "num_base_bdevs": 3, 00:07:57.868 "num_base_bdevs_discovered": 3, 00:07:57.868 "num_base_bdevs_operational": 3, 00:07:57.868 "base_bdevs_list": [ 00:07:57.868 { 00:07:57.868 "name": "pt1", 00:07:57.868 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:57.868 "is_configured": true, 00:07:57.868 "data_offset": 2048, 00:07:57.868 "data_size": 63488 00:07:57.868 }, 00:07:57.868 { 00:07:57.868 "name": "pt2", 00:07:57.868 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:57.868 "is_configured": true, 00:07:57.868 "data_offset": 2048, 00:07:57.868 "data_size": 63488 00:07:57.868 }, 00:07:57.868 { 00:07:57.868 "name": "pt3", 00:07:57.868 "uuid": "00000000-0000-0000-0000-000000000003", 00:07:57.868 "is_configured": true, 00:07:57.868 "data_offset": 2048, 00:07:57.868 "data_size": 63488 00:07:57.868 } 00:07:57.868 ] 00:07:57.868 }' 00:07:57.868 04:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:57.868 04:57:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.128 04:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:07:58.128 04:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:58.128 04:57:08 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:58.128 04:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:58.128 04:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:58.128 04:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:58.128 04:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:58.128 04:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:58.128 04:57:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.128 04:57:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.128 [2024-12-14 04:57:08.899550] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:58.128 04:57:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.128 04:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:58.128 "name": "raid_bdev1", 00:07:58.128 "aliases": [ 00:07:58.128 "56b7379f-2dff-4391-9d4e-f4f3379d911b" 00:07:58.128 ], 00:07:58.128 "product_name": "Raid Volume", 00:07:58.128 "block_size": 512, 00:07:58.128 "num_blocks": 190464, 00:07:58.128 "uuid": "56b7379f-2dff-4391-9d4e-f4f3379d911b", 00:07:58.128 "assigned_rate_limits": { 00:07:58.128 "rw_ios_per_sec": 0, 00:07:58.128 "rw_mbytes_per_sec": 0, 00:07:58.128 "r_mbytes_per_sec": 0, 00:07:58.128 "w_mbytes_per_sec": 0 00:07:58.128 }, 00:07:58.128 "claimed": false, 00:07:58.128 "zoned": false, 00:07:58.128 "supported_io_types": { 00:07:58.128 "read": true, 00:07:58.128 "write": true, 00:07:58.128 "unmap": true, 00:07:58.128 "flush": true, 00:07:58.128 "reset": true, 00:07:58.128 "nvme_admin": false, 00:07:58.128 "nvme_io": false, 00:07:58.128 "nvme_io_md": false, 00:07:58.128 
"write_zeroes": true, 00:07:58.128 "zcopy": false, 00:07:58.128 "get_zone_info": false, 00:07:58.128 "zone_management": false, 00:07:58.128 "zone_append": false, 00:07:58.128 "compare": false, 00:07:58.128 "compare_and_write": false, 00:07:58.128 "abort": false, 00:07:58.128 "seek_hole": false, 00:07:58.128 "seek_data": false, 00:07:58.128 "copy": false, 00:07:58.128 "nvme_iov_md": false 00:07:58.128 }, 00:07:58.128 "memory_domains": [ 00:07:58.128 { 00:07:58.128 "dma_device_id": "system", 00:07:58.128 "dma_device_type": 1 00:07:58.128 }, 00:07:58.128 { 00:07:58.128 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:58.128 "dma_device_type": 2 00:07:58.128 }, 00:07:58.128 { 00:07:58.128 "dma_device_id": "system", 00:07:58.128 "dma_device_type": 1 00:07:58.128 }, 00:07:58.128 { 00:07:58.128 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:58.128 "dma_device_type": 2 00:07:58.128 }, 00:07:58.128 { 00:07:58.128 "dma_device_id": "system", 00:07:58.128 "dma_device_type": 1 00:07:58.128 }, 00:07:58.128 { 00:07:58.128 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:58.128 "dma_device_type": 2 00:07:58.128 } 00:07:58.128 ], 00:07:58.128 "driver_specific": { 00:07:58.128 "raid": { 00:07:58.128 "uuid": "56b7379f-2dff-4391-9d4e-f4f3379d911b", 00:07:58.128 "strip_size_kb": 64, 00:07:58.128 "state": "online", 00:07:58.128 "raid_level": "raid0", 00:07:58.128 "superblock": true, 00:07:58.128 "num_base_bdevs": 3, 00:07:58.128 "num_base_bdevs_discovered": 3, 00:07:58.128 "num_base_bdevs_operational": 3, 00:07:58.128 "base_bdevs_list": [ 00:07:58.128 { 00:07:58.128 "name": "pt1", 00:07:58.128 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:58.128 "is_configured": true, 00:07:58.128 "data_offset": 2048, 00:07:58.128 "data_size": 63488 00:07:58.128 }, 00:07:58.128 { 00:07:58.128 "name": "pt2", 00:07:58.128 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:58.128 "is_configured": true, 00:07:58.128 "data_offset": 2048, 00:07:58.128 "data_size": 63488 00:07:58.128 }, 00:07:58.128 
{ 00:07:58.128 "name": "pt3", 00:07:58.128 "uuid": "00000000-0000-0000-0000-000000000003", 00:07:58.128 "is_configured": true, 00:07:58.128 "data_offset": 2048, 00:07:58.128 "data_size": 63488 00:07:58.128 } 00:07:58.128 ] 00:07:58.128 } 00:07:58.128 } 00:07:58.128 }' 00:07:58.128 04:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:58.128 04:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:58.128 pt2 00:07:58.128 pt3' 00:07:58.128 04:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:58.387 04:57:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:58.387 04:57:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:58.387 04:57:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:58.387 04:57:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.387 04:57:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:58.387 04:57:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.387 04:57:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.387 04:57:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:58.387 04:57:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:58.387 04:57:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:58.387 04:57:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:58.387 04:57:09 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.387 04:57:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:58.387 04:57:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.387 04:57:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.387 04:57:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:58.387 04:57:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:58.387 04:57:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:58.387 04:57:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:07:58.387 04:57:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:58.387 04:57:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.387 04:57:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.387 04:57:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.387 04:57:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:58.387 04:57:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:58.387 04:57:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:58.387 04:57:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.387 04:57:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.387 04:57:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:07:58.388 
[2024-12-14 04:57:09.183534] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:58.388 04:57:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.388 04:57:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 56b7379f-2dff-4391-9d4e-f4f3379d911b '!=' 56b7379f-2dff-4391-9d4e-f4f3379d911b ']' 00:07:58.388 04:57:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:07:58.388 04:57:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:58.388 04:57:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:58.388 04:57:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 76267 00:07:58.388 04:57:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 76267 ']' 00:07:58.388 04:57:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 76267 00:07:58.388 04:57:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:07:58.388 04:57:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:58.388 04:57:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76267 00:07:58.388 killing process with pid 76267 00:07:58.388 04:57:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:58.388 04:57:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:58.388 04:57:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76267' 00:07:58.388 04:57:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 76267 00:07:58.388 [2024-12-14 04:57:09.268077] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:58.388 [2024-12-14 04:57:09.268194] bdev_raid.c: 
492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:58.388 04:57:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 76267 00:07:58.388 [2024-12-14 04:57:09.268256] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:58.388 [2024-12-14 04:57:09.268265] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:07:58.647 [2024-12-14 04:57:09.301384] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:58.908 04:57:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:07:58.908 00:07:58.908 real 0m3.888s 00:07:58.908 user 0m6.113s 00:07:58.908 sys 0m0.789s 00:07:58.908 ************************************ 00:07:58.908 END TEST raid_superblock_test 00:07:58.908 ************************************ 00:07:58.908 04:57:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:58.908 04:57:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.908 04:57:09 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 3 read 00:07:58.908 04:57:09 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:58.908 04:57:09 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:58.908 04:57:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:58.908 ************************************ 00:07:58.908 START TEST raid_read_error_test 00:07:58.908 ************************************ 00:07:58.908 04:57:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 3 read 00:07:58.908 04:57:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:07:58.908 04:57:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:07:58.908 04:57:09 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:07:58.908 04:57:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:58.908 04:57:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:58.908 04:57:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:58.908 04:57:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:58.908 04:57:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:58.908 04:57:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:58.908 04:57:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:58.908 04:57:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:58.908 04:57:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:07:58.908 04:57:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:58.908 04:57:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:58.908 04:57:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:07:58.908 04:57:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:58.908 04:57:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:58.908 04:57:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:58.908 04:57:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:58.908 04:57:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:58.908 04:57:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:58.908 04:57:09 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:07:58.908 04:57:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:58.908 04:57:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:58.908 04:57:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:58.908 04:57:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.HL6zvP78eM 00:07:58.908 04:57:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=76509 00:07:58.908 04:57:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:58.908 04:57:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 76509 00:07:58.908 04:57:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 76509 ']' 00:07:58.908 04:57:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:58.908 04:57:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:58.908 04:57:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:58.908 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:58.908 04:57:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:58.908 04:57:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.908 [2024-12-14 04:57:09.723656] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:07:58.908 [2024-12-14 04:57:09.723864] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76509 ] 00:07:59.168 [2024-12-14 04:57:09.884133] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:59.168 [2024-12-14 04:57:09.929458] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:59.168 [2024-12-14 04:57:09.971304] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:59.168 [2024-12-14 04:57:09.971417] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:59.737 04:57:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:59.737 04:57:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:07:59.737 04:57:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:59.737 04:57:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:59.737 04:57:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:59.737 04:57:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.737 BaseBdev1_malloc 00:07:59.737 04:57:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:59.737 04:57:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:59.737 04:57:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:59.737 04:57:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.737 true 00:07:59.737 04:57:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:07:59.737 04:57:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:59.737 04:57:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:59.737 04:57:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.737 [2024-12-14 04:57:10.581755] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:59.737 [2024-12-14 04:57:10.581811] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:59.737 [2024-12-14 04:57:10.581831] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:59.737 [2024-12-14 04:57:10.581840] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:59.737 [2024-12-14 04:57:10.583921] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:59.737 [2024-12-14 04:57:10.584017] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:59.737 BaseBdev1 00:07:59.737 04:57:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:59.737 04:57:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:59.737 04:57:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:59.737 04:57:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:59.737 04:57:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.737 BaseBdev2_malloc 00:07:59.737 04:57:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:59.737 04:57:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:59.737 04:57:10 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:07:59.737 04:57:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.002 true 00:08:00.002 04:57:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.002 04:57:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:00.002 04:57:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.002 04:57:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.002 [2024-12-14 04:57:10.630601] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:00.002 [2024-12-14 04:57:10.630687] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:00.002 [2024-12-14 04:57:10.630725] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:00.002 [2024-12-14 04:57:10.630733] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:00.002 [2024-12-14 04:57:10.632706] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:00.002 [2024-12-14 04:57:10.632743] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:00.002 BaseBdev2 00:08:00.002 04:57:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.002 04:57:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:00.002 04:57:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:08:00.002 04:57:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.002 04:57:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.002 BaseBdev3_malloc 00:08:00.002 04:57:10 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.002 04:57:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:08:00.002 04:57:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.002 04:57:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.002 true 00:08:00.002 04:57:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.002 04:57:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:08:00.002 04:57:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.002 04:57:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.002 [2024-12-14 04:57:10.671069] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:08:00.002 [2024-12-14 04:57:10.671113] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:00.002 [2024-12-14 04:57:10.671129] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:08:00.002 [2024-12-14 04:57:10.671137] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:00.002 [2024-12-14 04:57:10.673204] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:00.002 [2024-12-14 04:57:10.673236] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:08:00.002 BaseBdev3 00:08:00.003 04:57:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.003 04:57:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:08:00.003 04:57:10 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.003 04:57:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.003 [2024-12-14 04:57:10.683114] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:00.003 [2024-12-14 04:57:10.684960] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:00.003 [2024-12-14 04:57:10.685078] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:00.003 [2024-12-14 04:57:10.685294] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:08:00.003 [2024-12-14 04:57:10.685344] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:00.003 [2024-12-14 04:57:10.685585] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:08:00.003 [2024-12-14 04:57:10.685741] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:08:00.003 [2024-12-14 04:57:10.685779] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00 00:08:00.003 [2024-12-14 04:57:10.685937] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:00.003 04:57:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.003 04:57:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:00.003 04:57:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:00.003 04:57:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:00.003 04:57:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:00.003 04:57:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:00.003 04:57:10 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:00.003 04:57:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:00.003 04:57:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:00.003 04:57:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:00.003 04:57:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:00.003 04:57:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:00.003 04:57:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:00.003 04:57:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.003 04:57:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.003 04:57:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.003 04:57:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:00.003 "name": "raid_bdev1", 00:08:00.003 "uuid": "2eb30488-c6f4-46d3-bc9e-00d5e177ab4b", 00:08:00.003 "strip_size_kb": 64, 00:08:00.003 "state": "online", 00:08:00.003 "raid_level": "raid0", 00:08:00.003 "superblock": true, 00:08:00.003 "num_base_bdevs": 3, 00:08:00.003 "num_base_bdevs_discovered": 3, 00:08:00.003 "num_base_bdevs_operational": 3, 00:08:00.003 "base_bdevs_list": [ 00:08:00.003 { 00:08:00.003 "name": "BaseBdev1", 00:08:00.003 "uuid": "9f7d9eaf-5a35-5aab-9e3b-5fb4743d2a69", 00:08:00.003 "is_configured": true, 00:08:00.003 "data_offset": 2048, 00:08:00.003 "data_size": 63488 00:08:00.003 }, 00:08:00.003 { 00:08:00.003 "name": "BaseBdev2", 00:08:00.003 "uuid": "3a1c88a9-0540-546c-86f1-92fa07ad2421", 00:08:00.003 "is_configured": true, 00:08:00.003 "data_offset": 2048, 00:08:00.003 "data_size": 63488 
00:08:00.003 }, 00:08:00.003 { 00:08:00.003 "name": "BaseBdev3", 00:08:00.003 "uuid": "24c783b6-4f24-5b4b-8a25-233c108434c1", 00:08:00.003 "is_configured": true, 00:08:00.003 "data_offset": 2048, 00:08:00.003 "data_size": 63488 00:08:00.003 } 00:08:00.003 ] 00:08:00.003 }' 00:08:00.003 04:57:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:00.003 04:57:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.604 04:57:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:00.604 04:57:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:00.604 [2024-12-14 04:57:11.250523] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:01.544 04:57:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:08:01.544 04:57:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.544 04:57:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.544 04:57:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.544 04:57:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:01.544 04:57:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:08:01.544 04:57:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:08:01.544 04:57:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:01.544 04:57:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:01.544 04:57:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:08:01.544 04:57:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:01.544 04:57:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:01.544 04:57:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:01.544 04:57:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:01.544 04:57:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:01.544 04:57:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:01.544 04:57:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:01.544 04:57:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:01.544 04:57:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:01.544 04:57:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.544 04:57:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.544 04:57:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.544 04:57:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:01.544 "name": "raid_bdev1", 00:08:01.544 "uuid": "2eb30488-c6f4-46d3-bc9e-00d5e177ab4b", 00:08:01.544 "strip_size_kb": 64, 00:08:01.544 "state": "online", 00:08:01.544 "raid_level": "raid0", 00:08:01.544 "superblock": true, 00:08:01.544 "num_base_bdevs": 3, 00:08:01.544 "num_base_bdevs_discovered": 3, 00:08:01.544 "num_base_bdevs_operational": 3, 00:08:01.544 "base_bdevs_list": [ 00:08:01.544 { 00:08:01.544 "name": "BaseBdev1", 00:08:01.544 "uuid": "9f7d9eaf-5a35-5aab-9e3b-5fb4743d2a69", 00:08:01.544 "is_configured": true, 00:08:01.544 "data_offset": 2048, 00:08:01.544 "data_size": 63488 
00:08:01.544 }, 00:08:01.544 { 00:08:01.544 "name": "BaseBdev2", 00:08:01.544 "uuid": "3a1c88a9-0540-546c-86f1-92fa07ad2421", 00:08:01.544 "is_configured": true, 00:08:01.544 "data_offset": 2048, 00:08:01.544 "data_size": 63488 00:08:01.544 }, 00:08:01.544 { 00:08:01.544 "name": "BaseBdev3", 00:08:01.544 "uuid": "24c783b6-4f24-5b4b-8a25-233c108434c1", 00:08:01.544 "is_configured": true, 00:08:01.544 "data_offset": 2048, 00:08:01.544 "data_size": 63488 00:08:01.544 } 00:08:01.544 ] 00:08:01.544 }' 00:08:01.544 04:57:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:01.544 04:57:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.804 04:57:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:01.804 04:57:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.804 04:57:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.804 [2024-12-14 04:57:12.666470] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:01.804 [2024-12-14 04:57:12.666503] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:01.804 [2024-12-14 04:57:12.669015] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:01.804 [2024-12-14 04:57:12.669073] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:01.804 [2024-12-14 04:57:12.669108] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:01.804 [2024-12-14 04:57:12.669119] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:08:01.804 { 00:08:01.804 "results": [ 00:08:01.804 { 00:08:01.804 "job": "raid_bdev1", 00:08:01.804 "core_mask": "0x1", 00:08:01.804 "workload": "randrw", 00:08:01.804 "percentage": 50, 
00:08:01.804 "status": "finished", 00:08:01.804 "queue_depth": 1, 00:08:01.804 "io_size": 131072, 00:08:01.804 "runtime": 1.416839, 00:08:01.804 "iops": 17210.141730994135, 00:08:01.804 "mibps": 2151.267716374267, 00:08:01.804 "io_failed": 1, 00:08:01.804 "io_timeout": 0, 00:08:01.804 "avg_latency_us": 80.5089815218569, 00:08:01.804 "min_latency_us": 24.593886462882097, 00:08:01.804 "max_latency_us": 1445.2262008733624 00:08:01.804 } 00:08:01.804 ], 00:08:01.804 "core_count": 1 00:08:01.804 } 00:08:01.804 04:57:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.804 04:57:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 76509 00:08:01.804 04:57:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 76509 ']' 00:08:01.804 04:57:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 76509 00:08:01.804 04:57:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:08:01.804 04:57:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:01.804 04:57:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76509 00:08:02.065 04:57:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:02.065 04:57:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:02.065 04:57:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76509' 00:08:02.065 killing process with pid 76509 00:08:02.065 04:57:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 76509 00:08:02.065 [2024-12-14 04:57:12.715815] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:02.065 04:57:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 76509 00:08:02.065 [2024-12-14 
04:57:12.741445] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:02.325 04:57:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.HL6zvP78eM 00:08:02.325 04:57:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:02.325 04:57:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:02.325 04:57:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:08:02.325 04:57:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:08:02.325 ************************************ 00:08:02.325 END TEST raid_read_error_test 00:08:02.325 ************************************ 00:08:02.325 04:57:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:02.325 04:57:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:02.325 04:57:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:08:02.325 00:08:02.325 real 0m3.369s 00:08:02.325 user 0m4.313s 00:08:02.325 sys 0m0.528s 00:08:02.325 04:57:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:02.325 04:57:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.325 04:57:13 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 3 write 00:08:02.325 04:57:13 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:02.325 04:57:13 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:02.325 04:57:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:02.325 ************************************ 00:08:02.325 START TEST raid_write_error_test 00:08:02.325 ************************************ 00:08:02.325 04:57:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 3 write 00:08:02.325 04:57:13 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:08:02.325 04:57:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:08:02.325 04:57:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:08:02.325 04:57:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:02.325 04:57:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:02.325 04:57:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:02.325 04:57:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:02.325 04:57:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:02.325 04:57:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:02.325 04:57:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:02.325 04:57:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:02.326 04:57:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:08:02.326 04:57:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:02.326 04:57:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:02.326 04:57:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:02.326 04:57:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:02.326 04:57:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:02.326 04:57:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:02.326 04:57:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:02.326 04:57:13 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:02.326 04:57:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:02.326 04:57:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:08:02.326 04:57:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:02.326 04:57:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:02.326 04:57:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:02.326 04:57:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.YgVQMnpafs 00:08:02.326 04:57:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=76638 00:08:02.326 04:57:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:02.326 04:57:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 76638 00:08:02.326 04:57:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 76638 ']' 00:08:02.326 04:57:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:02.326 04:57:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:02.326 04:57:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:02.326 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:02.326 04:57:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:02.326 04:57:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.326 [2024-12-14 04:57:13.160663] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:08:02.326 [2024-12-14 04:57:13.160795] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76638 ] 00:08:02.585 [2024-12-14 04:57:13.318778] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:02.585 [2024-12-14 04:57:13.364809] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:02.585 [2024-12-14 04:57:13.407373] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:02.585 [2024-12-14 04:57:13.407493] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:03.156 04:57:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:03.156 04:57:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:08:03.156 04:57:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:03.156 04:57:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:03.156 04:57:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.156 04:57:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.156 BaseBdev1_malloc 00:08:03.156 04:57:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.156 04:57:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:08:03.156 04:57:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.156 04:57:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.156 true 00:08:03.156 04:57:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.156 04:57:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:03.156 04:57:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.156 04:57:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.156 [2024-12-14 04:57:14.009641] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:03.156 [2024-12-14 04:57:14.009697] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:03.156 [2024-12-14 04:57:14.009719] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:03.156 [2024-12-14 04:57:14.009727] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:03.156 [2024-12-14 04:57:14.011754] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:03.156 [2024-12-14 04:57:14.011787] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:03.156 BaseBdev1 00:08:03.156 04:57:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.156 04:57:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:03.156 04:57:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:03.156 04:57:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.156 04:57:14 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:03.512 BaseBdev2_malloc 00:08:03.512 04:57:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.512 04:57:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:03.512 04:57:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.512 04:57:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.512 true 00:08:03.513 04:57:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.513 04:57:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:03.513 04:57:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.513 04:57:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.513 [2024-12-14 04:57:14.064043] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:03.513 [2024-12-14 04:57:14.064107] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:03.513 [2024-12-14 04:57:14.064136] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:03.513 [2024-12-14 04:57:14.064149] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:03.513 [2024-12-14 04:57:14.067331] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:03.513 [2024-12-14 04:57:14.067375] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:03.513 BaseBdev2 00:08:03.513 04:57:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.513 04:57:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:03.513 04:57:14 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:08:03.513 04:57:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.513 04:57:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.513 BaseBdev3_malloc 00:08:03.513 04:57:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.513 04:57:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:08:03.513 04:57:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.513 04:57:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.513 true 00:08:03.513 04:57:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.513 04:57:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:08:03.513 04:57:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.513 04:57:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.513 [2024-12-14 04:57:14.104988] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:08:03.513 [2024-12-14 04:57:14.105029] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:03.513 [2024-12-14 04:57:14.105047] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:08:03.513 [2024-12-14 04:57:14.105055] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:03.513 [2024-12-14 04:57:14.107014] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:03.513 [2024-12-14 04:57:14.107046] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:08:03.513 BaseBdev3 00:08:03.513 04:57:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.513 04:57:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:08:03.513 04:57:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.513 04:57:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.513 [2024-12-14 04:57:14.117026] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:03.513 [2024-12-14 04:57:14.118817] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:03.513 [2024-12-14 04:57:14.118910] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:03.513 [2024-12-14 04:57:14.119075] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:08:03.513 [2024-12-14 04:57:14.119100] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:03.513 [2024-12-14 04:57:14.119362] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:08:03.513 [2024-12-14 04:57:14.119487] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:08:03.513 [2024-12-14 04:57:14.119500] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00 00:08:03.513 [2024-12-14 04:57:14.119624] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:03.513 04:57:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.513 04:57:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:03.513 04:57:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=raid_bdev1 00:08:03.513 04:57:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:03.513 04:57:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:03.513 04:57:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:03.513 04:57:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:03.513 04:57:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:03.513 04:57:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:03.513 04:57:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:03.513 04:57:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:03.513 04:57:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:03.513 04:57:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.513 04:57:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:03.513 04:57:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.513 04:57:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.513 04:57:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:03.513 "name": "raid_bdev1", 00:08:03.513 "uuid": "d6397b59-10dc-4e73-92db-786e22e06667", 00:08:03.513 "strip_size_kb": 64, 00:08:03.513 "state": "online", 00:08:03.513 "raid_level": "raid0", 00:08:03.513 "superblock": true, 00:08:03.513 "num_base_bdevs": 3, 00:08:03.513 "num_base_bdevs_discovered": 3, 00:08:03.513 "num_base_bdevs_operational": 3, 00:08:03.513 "base_bdevs_list": [ 00:08:03.513 { 00:08:03.513 "name": "BaseBdev1", 
00:08:03.513 "uuid": "387f1f72-bdce-5ecd-97f6-31a8ed8a5f9d", 00:08:03.513 "is_configured": true, 00:08:03.513 "data_offset": 2048, 00:08:03.513 "data_size": 63488 00:08:03.513 }, 00:08:03.513 { 00:08:03.513 "name": "BaseBdev2", 00:08:03.513 "uuid": "2ab617b5-ece6-5eef-ab00-e7a96bd8eb6e", 00:08:03.513 "is_configured": true, 00:08:03.513 "data_offset": 2048, 00:08:03.513 "data_size": 63488 00:08:03.513 }, 00:08:03.513 { 00:08:03.513 "name": "BaseBdev3", 00:08:03.513 "uuid": "5012df38-14e9-529e-be55-22ca7c340a0e", 00:08:03.513 "is_configured": true, 00:08:03.513 "data_offset": 2048, 00:08:03.513 "data_size": 63488 00:08:03.513 } 00:08:03.513 ] 00:08:03.513 }' 00:08:03.513 04:57:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:03.513 04:57:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.777 04:57:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:03.777 04:57:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:03.777 [2024-12-14 04:57:14.632566] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:04.714 04:57:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:08:04.714 04:57:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.714 04:57:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.714 04:57:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.714 04:57:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:04.714 04:57:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:08:04.714 04:57:15 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:08:04.714 04:57:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:04.714 04:57:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:04.714 04:57:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:04.714 04:57:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:04.714 04:57:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:04.714 04:57:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:04.714 04:57:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:04.714 04:57:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:04.714 04:57:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:04.714 04:57:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:04.714 04:57:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:04.714 04:57:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.714 04:57:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:04.714 04:57:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.714 04:57:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.974 04:57:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:04.974 "name": "raid_bdev1", 00:08:04.974 "uuid": "d6397b59-10dc-4e73-92db-786e22e06667", 00:08:04.974 "strip_size_kb": 64, 00:08:04.974 "state": "online", 00:08:04.974 
"raid_level": "raid0", 00:08:04.974 "superblock": true, 00:08:04.974 "num_base_bdevs": 3, 00:08:04.974 "num_base_bdevs_discovered": 3, 00:08:04.974 "num_base_bdevs_operational": 3, 00:08:04.974 "base_bdevs_list": [ 00:08:04.974 { 00:08:04.974 "name": "BaseBdev1", 00:08:04.974 "uuid": "387f1f72-bdce-5ecd-97f6-31a8ed8a5f9d", 00:08:04.974 "is_configured": true, 00:08:04.974 "data_offset": 2048, 00:08:04.974 "data_size": 63488 00:08:04.974 }, 00:08:04.974 { 00:08:04.974 "name": "BaseBdev2", 00:08:04.974 "uuid": "2ab617b5-ece6-5eef-ab00-e7a96bd8eb6e", 00:08:04.974 "is_configured": true, 00:08:04.974 "data_offset": 2048, 00:08:04.974 "data_size": 63488 00:08:04.974 }, 00:08:04.974 { 00:08:04.974 "name": "BaseBdev3", 00:08:04.974 "uuid": "5012df38-14e9-529e-be55-22ca7c340a0e", 00:08:04.974 "is_configured": true, 00:08:04.974 "data_offset": 2048, 00:08:04.974 "data_size": 63488 00:08:04.974 } 00:08:04.974 ] 00:08:04.974 }' 00:08:04.974 04:57:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:04.974 04:57:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.234 04:57:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:05.234 04:57:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.234 04:57:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.234 [2024-12-14 04:57:16.028459] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:05.234 [2024-12-14 04:57:16.028497] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:05.234 [2024-12-14 04:57:16.030931] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:05.234 [2024-12-14 04:57:16.030990] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:05.234 [2024-12-14 04:57:16.031027] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:05.234 [2024-12-14 04:57:16.031039] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:08:05.234 { 00:08:05.234 "results": [ 00:08:05.234 { 00:08:05.234 "job": "raid_bdev1", 00:08:05.234 "core_mask": "0x1", 00:08:05.234 "workload": "randrw", 00:08:05.234 "percentage": 50, 00:08:05.234 "status": "finished", 00:08:05.234 "queue_depth": 1, 00:08:05.234 "io_size": 131072, 00:08:05.234 "runtime": 1.39681, 00:08:05.234 "iops": 17352.395816181157, 00:08:05.234 "mibps": 2169.0494770226446, 00:08:05.234 "io_failed": 1, 00:08:05.234 "io_timeout": 0, 00:08:05.234 "avg_latency_us": 79.9538436288842, 00:08:05.234 "min_latency_us": 24.482096069868994, 00:08:05.234 "max_latency_us": 1359.3711790393013 00:08:05.234 } 00:08:05.234 ], 00:08:05.234 "core_count": 1 00:08:05.234 } 00:08:05.234 04:57:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.234 04:57:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 76638 00:08:05.234 04:57:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 76638 ']' 00:08:05.234 04:57:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 76638 00:08:05.234 04:57:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:08:05.234 04:57:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:05.234 04:57:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76638 00:08:05.234 killing process with pid 76638 00:08:05.234 04:57:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:05.234 04:57:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:05.234 04:57:16 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76638' 00:08:05.234 04:57:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 76638 00:08:05.234 [2024-12-14 04:57:16.079201] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:05.234 04:57:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 76638 00:08:05.235 [2024-12-14 04:57:16.105274] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:05.495 04:57:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:05.495 04:57:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.YgVQMnpafs 00:08:05.495 04:57:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:05.495 04:57:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:08:05.495 04:57:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:08:05.495 04:57:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:05.495 04:57:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:05.495 04:57:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:08:05.495 00:08:05.495 real 0m3.292s 00:08:05.495 user 0m4.155s 00:08:05.495 sys 0m0.541s 00:08:05.495 04:57:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:05.495 04:57:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.495 ************************************ 00:08:05.495 END TEST raid_write_error_test 00:08:05.495 ************************************ 00:08:05.755 04:57:16 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:08:05.755 04:57:16 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test concat 3 false 00:08:05.755 04:57:16 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:05.755 04:57:16 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:05.755 04:57:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:05.755 ************************************ 00:08:05.755 START TEST raid_state_function_test 00:08:05.755 ************************************ 00:08:05.755 04:57:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 3 false 00:08:05.755 04:57:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:08:05.755 04:57:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:08:05.755 04:57:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:08:05.755 04:57:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:05.755 04:57:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:05.755 04:57:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:05.755 04:57:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:05.755 04:57:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:05.755 04:57:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:05.755 04:57:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:05.755 04:57:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:05.755 04:57:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:05.755 04:57:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:05.755 04:57:16 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:05.755 04:57:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:05.755 04:57:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:05.755 04:57:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:05.755 04:57:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:05.755 04:57:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:05.755 04:57:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:05.755 04:57:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:05.755 04:57:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:08:05.755 04:57:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:05.755 04:57:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:05.755 04:57:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:08:05.755 04:57:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:08:05.755 04:57:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=76765 00:08:05.755 04:57:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:05.755 Process raid pid: 76765 00:08:05.755 04:57:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 76765' 00:08:05.755 04:57:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 76765 00:08:05.755 04:57:16 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 76765 ']' 00:08:05.755 04:57:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:05.755 04:57:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:05.755 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:05.755 04:57:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:05.755 04:57:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:05.755 04:57:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.756 [2024-12-14 04:57:16.514690] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:08:05.756 [2024-12-14 04:57:16.514825] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:06.015 [2024-12-14 04:57:16.657305] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:06.015 [2024-12-14 04:57:16.703481] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:06.015 [2024-12-14 04:57:16.745357] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:06.015 [2024-12-14 04:57:16.745406] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:06.585 04:57:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:06.585 04:57:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:08:06.585 04:57:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd 
bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:06.585 04:57:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.585 04:57:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.585 [2024-12-14 04:57:17.342648] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:06.585 [2024-12-14 04:57:17.342715] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:06.585 [2024-12-14 04:57:17.342741] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:06.585 [2024-12-14 04:57:17.342751] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:06.585 [2024-12-14 04:57:17.342757] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:06.585 [2024-12-14 04:57:17.342768] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:06.585 04:57:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.585 04:57:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:06.585 04:57:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:06.585 04:57:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:06.585 04:57:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:06.585 04:57:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:06.585 04:57:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:06.585 04:57:17 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:06.585 04:57:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:06.585 04:57:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:06.585 04:57:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:06.585 04:57:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:06.585 04:57:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:06.585 04:57:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.585 04:57:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.585 04:57:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.585 04:57:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:06.585 "name": "Existed_Raid", 00:08:06.585 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:06.585 "strip_size_kb": 64, 00:08:06.585 "state": "configuring", 00:08:06.585 "raid_level": "concat", 00:08:06.585 "superblock": false, 00:08:06.585 "num_base_bdevs": 3, 00:08:06.585 "num_base_bdevs_discovered": 0, 00:08:06.585 "num_base_bdevs_operational": 3, 00:08:06.585 "base_bdevs_list": [ 00:08:06.585 { 00:08:06.585 "name": "BaseBdev1", 00:08:06.585 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:06.585 "is_configured": false, 00:08:06.585 "data_offset": 0, 00:08:06.585 "data_size": 0 00:08:06.585 }, 00:08:06.585 { 00:08:06.585 "name": "BaseBdev2", 00:08:06.585 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:06.585 "is_configured": false, 00:08:06.585 "data_offset": 0, 00:08:06.585 "data_size": 0 00:08:06.585 }, 00:08:06.585 { 00:08:06.585 "name": "BaseBdev3", 00:08:06.585 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:08:06.585 "is_configured": false, 00:08:06.585 "data_offset": 0, 00:08:06.585 "data_size": 0 00:08:06.585 } 00:08:06.585 ] 00:08:06.585 }' 00:08:06.585 04:57:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:06.585 04:57:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.155 04:57:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:07.155 04:57:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.155 04:57:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.155 [2024-12-14 04:57:17.789836] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:07.155 [2024-12-14 04:57:17.789887] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:08:07.155 04:57:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.155 04:57:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:07.155 04:57:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.155 04:57:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.155 [2024-12-14 04:57:17.797854] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:07.155 [2024-12-14 04:57:17.797897] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:07.155 [2024-12-14 04:57:17.797906] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:07.155 [2024-12-14 04:57:17.797915] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev2 doesn't exist now 00:08:07.155 [2024-12-14 04:57:17.797921] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:07.155 [2024-12-14 04:57:17.797929] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:07.155 04:57:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.155 04:57:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:07.155 04:57:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.155 04:57:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.155 [2024-12-14 04:57:17.814642] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:07.155 BaseBdev1 00:08:07.155 04:57:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.155 04:57:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:07.155 04:57:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:08:07.155 04:57:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:07.155 04:57:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:07.155 04:57:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:07.155 04:57:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:07.155 04:57:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:07.155 04:57:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.155 04:57:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:08:07.155 04:57:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.155 04:57:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:07.155 04:57:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.155 04:57:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.155 [ 00:08:07.155 { 00:08:07.155 "name": "BaseBdev1", 00:08:07.155 "aliases": [ 00:08:07.155 "c14cb782-97b2-4f94-9f1c-caba2f6b34a3" 00:08:07.155 ], 00:08:07.155 "product_name": "Malloc disk", 00:08:07.155 "block_size": 512, 00:08:07.155 "num_blocks": 65536, 00:08:07.155 "uuid": "c14cb782-97b2-4f94-9f1c-caba2f6b34a3", 00:08:07.155 "assigned_rate_limits": { 00:08:07.155 "rw_ios_per_sec": 0, 00:08:07.155 "rw_mbytes_per_sec": 0, 00:08:07.155 "r_mbytes_per_sec": 0, 00:08:07.155 "w_mbytes_per_sec": 0 00:08:07.155 }, 00:08:07.155 "claimed": true, 00:08:07.155 "claim_type": "exclusive_write", 00:08:07.155 "zoned": false, 00:08:07.155 "supported_io_types": { 00:08:07.155 "read": true, 00:08:07.155 "write": true, 00:08:07.155 "unmap": true, 00:08:07.155 "flush": true, 00:08:07.155 "reset": true, 00:08:07.155 "nvme_admin": false, 00:08:07.155 "nvme_io": false, 00:08:07.155 "nvme_io_md": false, 00:08:07.155 "write_zeroes": true, 00:08:07.155 "zcopy": true, 00:08:07.155 "get_zone_info": false, 00:08:07.155 "zone_management": false, 00:08:07.155 "zone_append": false, 00:08:07.155 "compare": false, 00:08:07.155 "compare_and_write": false, 00:08:07.155 "abort": true, 00:08:07.155 "seek_hole": false, 00:08:07.155 "seek_data": false, 00:08:07.155 "copy": true, 00:08:07.155 "nvme_iov_md": false 00:08:07.155 }, 00:08:07.155 "memory_domains": [ 00:08:07.155 { 00:08:07.155 "dma_device_id": "system", 00:08:07.155 "dma_device_type": 1 00:08:07.155 }, 00:08:07.155 { 00:08:07.155 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:08:07.155 "dma_device_type": 2 00:08:07.155 } 00:08:07.155 ], 00:08:07.155 "driver_specific": {} 00:08:07.155 } 00:08:07.155 ] 00:08:07.155 04:57:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.155 04:57:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:07.155 04:57:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:07.155 04:57:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:07.155 04:57:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:07.155 04:57:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:07.155 04:57:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:07.155 04:57:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:07.155 04:57:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:07.155 04:57:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:07.155 04:57:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:07.155 04:57:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:07.155 04:57:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:07.155 04:57:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.155 04:57:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:07.155 04:57:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.155 04:57:17 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.155 04:57:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:07.155 "name": "Existed_Raid", 00:08:07.155 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:07.155 "strip_size_kb": 64, 00:08:07.155 "state": "configuring", 00:08:07.155 "raid_level": "concat", 00:08:07.155 "superblock": false, 00:08:07.155 "num_base_bdevs": 3, 00:08:07.155 "num_base_bdevs_discovered": 1, 00:08:07.155 "num_base_bdevs_operational": 3, 00:08:07.155 "base_bdevs_list": [ 00:08:07.155 { 00:08:07.155 "name": "BaseBdev1", 00:08:07.155 "uuid": "c14cb782-97b2-4f94-9f1c-caba2f6b34a3", 00:08:07.155 "is_configured": true, 00:08:07.155 "data_offset": 0, 00:08:07.155 "data_size": 65536 00:08:07.155 }, 00:08:07.155 { 00:08:07.155 "name": "BaseBdev2", 00:08:07.155 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:07.155 "is_configured": false, 00:08:07.155 "data_offset": 0, 00:08:07.155 "data_size": 0 00:08:07.155 }, 00:08:07.155 { 00:08:07.155 "name": "BaseBdev3", 00:08:07.155 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:07.155 "is_configured": false, 00:08:07.155 "data_offset": 0, 00:08:07.156 "data_size": 0 00:08:07.156 } 00:08:07.156 ] 00:08:07.156 }' 00:08:07.156 04:57:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:07.156 04:57:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.415 04:57:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:07.415 04:57:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.415 04:57:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.415 [2024-12-14 04:57:18.293897] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:07.416 [2024-12-14 04:57:18.293946] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:08:07.676 04:57:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.676 04:57:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:07.676 04:57:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.676 04:57:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.676 [2024-12-14 04:57:18.305898] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:07.676 [2024-12-14 04:57:18.307729] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:07.676 [2024-12-14 04:57:18.307772] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:07.676 [2024-12-14 04:57:18.307782] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:07.676 [2024-12-14 04:57:18.307808] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:07.676 04:57:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.676 04:57:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:07.676 04:57:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:07.676 04:57:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:07.676 04:57:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:07.676 04:57:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:07.676 04:57:18 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:07.676 04:57:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:07.676 04:57:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:07.676 04:57:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:07.676 04:57:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:07.676 04:57:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:07.676 04:57:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:07.676 04:57:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:07.676 04:57:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.676 04:57:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.676 04:57:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:07.676 04:57:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.676 04:57:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:07.676 "name": "Existed_Raid", 00:08:07.676 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:07.676 "strip_size_kb": 64, 00:08:07.676 "state": "configuring", 00:08:07.676 "raid_level": "concat", 00:08:07.676 "superblock": false, 00:08:07.676 "num_base_bdevs": 3, 00:08:07.676 "num_base_bdevs_discovered": 1, 00:08:07.676 "num_base_bdevs_operational": 3, 00:08:07.676 "base_bdevs_list": [ 00:08:07.676 { 00:08:07.676 "name": "BaseBdev1", 00:08:07.676 "uuid": "c14cb782-97b2-4f94-9f1c-caba2f6b34a3", 00:08:07.676 "is_configured": true, 00:08:07.676 "data_offset": 
0, 00:08:07.676 "data_size": 65536 00:08:07.676 }, 00:08:07.676 { 00:08:07.676 "name": "BaseBdev2", 00:08:07.676 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:07.676 "is_configured": false, 00:08:07.676 "data_offset": 0, 00:08:07.676 "data_size": 0 00:08:07.676 }, 00:08:07.676 { 00:08:07.676 "name": "BaseBdev3", 00:08:07.676 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:07.676 "is_configured": false, 00:08:07.676 "data_offset": 0, 00:08:07.676 "data_size": 0 00:08:07.676 } 00:08:07.676 ] 00:08:07.676 }' 00:08:07.676 04:57:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:07.676 04:57:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.936 04:57:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:07.936 04:57:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.936 04:57:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.936 [2024-12-14 04:57:18.796564] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:07.936 BaseBdev2 00:08:07.936 04:57:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.936 04:57:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:07.936 04:57:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:08:07.936 04:57:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:07.936 04:57:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:07.936 04:57:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:07.936 04:57:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 
00:08:07.936 04:57:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:07.936 04:57:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.936 04:57:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.936 04:57:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.936 04:57:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:07.936 04:57:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.936 04:57:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.196 [ 00:08:08.196 { 00:08:08.196 "name": "BaseBdev2", 00:08:08.196 "aliases": [ 00:08:08.196 "9eeedbbf-eef7-4b88-ae14-61ae5cf3d3b8" 00:08:08.196 ], 00:08:08.196 "product_name": "Malloc disk", 00:08:08.196 "block_size": 512, 00:08:08.196 "num_blocks": 65536, 00:08:08.196 "uuid": "9eeedbbf-eef7-4b88-ae14-61ae5cf3d3b8", 00:08:08.196 "assigned_rate_limits": { 00:08:08.196 "rw_ios_per_sec": 0, 00:08:08.196 "rw_mbytes_per_sec": 0, 00:08:08.196 "r_mbytes_per_sec": 0, 00:08:08.196 "w_mbytes_per_sec": 0 00:08:08.196 }, 00:08:08.196 "claimed": true, 00:08:08.196 "claim_type": "exclusive_write", 00:08:08.196 "zoned": false, 00:08:08.196 "supported_io_types": { 00:08:08.196 "read": true, 00:08:08.196 "write": true, 00:08:08.196 "unmap": true, 00:08:08.196 "flush": true, 00:08:08.196 "reset": true, 00:08:08.196 "nvme_admin": false, 00:08:08.196 "nvme_io": false, 00:08:08.196 "nvme_io_md": false, 00:08:08.196 "write_zeroes": true, 00:08:08.196 "zcopy": true, 00:08:08.196 "get_zone_info": false, 00:08:08.196 "zone_management": false, 00:08:08.196 "zone_append": false, 00:08:08.196 "compare": false, 00:08:08.196 "compare_and_write": false, 00:08:08.196 "abort": true, 00:08:08.196 "seek_hole": 
false, 00:08:08.196 "seek_data": false, 00:08:08.196 "copy": true, 00:08:08.196 "nvme_iov_md": false 00:08:08.196 }, 00:08:08.196 "memory_domains": [ 00:08:08.196 { 00:08:08.196 "dma_device_id": "system", 00:08:08.196 "dma_device_type": 1 00:08:08.196 }, 00:08:08.196 { 00:08:08.196 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:08.196 "dma_device_type": 2 00:08:08.196 } 00:08:08.196 ], 00:08:08.196 "driver_specific": {} 00:08:08.196 } 00:08:08.196 ] 00:08:08.196 04:57:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.196 04:57:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:08.196 04:57:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:08.196 04:57:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:08.196 04:57:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:08.196 04:57:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:08.196 04:57:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:08.196 04:57:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:08.196 04:57:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:08.196 04:57:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:08.196 04:57:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:08.196 04:57:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:08.196 04:57:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:08.196 04:57:18 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:08:08.196 04:57:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:08.196 04:57:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.196 04:57:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:08.196 04:57:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.196 04:57:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.196 04:57:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:08.196 "name": "Existed_Raid", 00:08:08.196 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:08.196 "strip_size_kb": 64, 00:08:08.196 "state": "configuring", 00:08:08.196 "raid_level": "concat", 00:08:08.196 "superblock": false, 00:08:08.196 "num_base_bdevs": 3, 00:08:08.196 "num_base_bdevs_discovered": 2, 00:08:08.196 "num_base_bdevs_operational": 3, 00:08:08.197 "base_bdevs_list": [ 00:08:08.197 { 00:08:08.197 "name": "BaseBdev1", 00:08:08.197 "uuid": "c14cb782-97b2-4f94-9f1c-caba2f6b34a3", 00:08:08.197 "is_configured": true, 00:08:08.197 "data_offset": 0, 00:08:08.197 "data_size": 65536 00:08:08.197 }, 00:08:08.197 { 00:08:08.197 "name": "BaseBdev2", 00:08:08.197 "uuid": "9eeedbbf-eef7-4b88-ae14-61ae5cf3d3b8", 00:08:08.197 "is_configured": true, 00:08:08.197 "data_offset": 0, 00:08:08.197 "data_size": 65536 00:08:08.197 }, 00:08:08.197 { 00:08:08.197 "name": "BaseBdev3", 00:08:08.197 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:08.197 "is_configured": false, 00:08:08.197 "data_offset": 0, 00:08:08.197 "data_size": 0 00:08:08.197 } 00:08:08.197 ] 00:08:08.197 }' 00:08:08.197 04:57:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:08.197 04:57:18 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:08.457 04:57:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:08.457 04:57:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.457 04:57:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.457 [2024-12-14 04:57:19.178974] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:08.457 [2024-12-14 04:57:19.179021] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:08:08.457 [2024-12-14 04:57:19.179057] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:08:08.457 [2024-12-14 04:57:19.179422] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:08:08.457 [2024-12-14 04:57:19.179587] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:08:08.457 [2024-12-14 04:57:19.179607] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:08:08.457 [2024-12-14 04:57:19.179825] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:08.457 BaseBdev3 00:08:08.457 04:57:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.457 04:57:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:08:08.457 04:57:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:08:08.457 04:57:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:08.457 04:57:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:08.457 04:57:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:08.457 04:57:19 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:08.457 04:57:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:08.457 04:57:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.457 04:57:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.457 04:57:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.458 04:57:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:08.458 04:57:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.458 04:57:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.458 [ 00:08:08.458 { 00:08:08.458 "name": "BaseBdev3", 00:08:08.458 "aliases": [ 00:08:08.458 "caaaba7c-b6e8-4e91-8ec3-bc1b59099099" 00:08:08.458 ], 00:08:08.458 "product_name": "Malloc disk", 00:08:08.458 "block_size": 512, 00:08:08.458 "num_blocks": 65536, 00:08:08.458 "uuid": "caaaba7c-b6e8-4e91-8ec3-bc1b59099099", 00:08:08.458 "assigned_rate_limits": { 00:08:08.458 "rw_ios_per_sec": 0, 00:08:08.458 "rw_mbytes_per_sec": 0, 00:08:08.458 "r_mbytes_per_sec": 0, 00:08:08.458 "w_mbytes_per_sec": 0 00:08:08.458 }, 00:08:08.458 "claimed": true, 00:08:08.458 "claim_type": "exclusive_write", 00:08:08.458 "zoned": false, 00:08:08.458 "supported_io_types": { 00:08:08.458 "read": true, 00:08:08.458 "write": true, 00:08:08.458 "unmap": true, 00:08:08.458 "flush": true, 00:08:08.458 "reset": true, 00:08:08.458 "nvme_admin": false, 00:08:08.458 "nvme_io": false, 00:08:08.458 "nvme_io_md": false, 00:08:08.458 "write_zeroes": true, 00:08:08.458 "zcopy": true, 00:08:08.458 "get_zone_info": false, 00:08:08.458 "zone_management": false, 00:08:08.458 "zone_append": false, 00:08:08.458 "compare": false, 
00:08:08.458 "compare_and_write": false, 00:08:08.458 "abort": true, 00:08:08.458 "seek_hole": false, 00:08:08.458 "seek_data": false, 00:08:08.458 "copy": true, 00:08:08.458 "nvme_iov_md": false 00:08:08.458 }, 00:08:08.458 "memory_domains": [ 00:08:08.458 { 00:08:08.458 "dma_device_id": "system", 00:08:08.458 "dma_device_type": 1 00:08:08.458 }, 00:08:08.458 { 00:08:08.458 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:08.458 "dma_device_type": 2 00:08:08.458 } 00:08:08.458 ], 00:08:08.458 "driver_specific": {} 00:08:08.458 } 00:08:08.458 ] 00:08:08.458 04:57:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.458 04:57:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:08.458 04:57:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:08.458 04:57:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:08.458 04:57:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:08:08.458 04:57:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:08.458 04:57:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:08.458 04:57:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:08.458 04:57:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:08.458 04:57:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:08.458 04:57:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:08.458 04:57:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:08.458 04:57:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:08:08.458 04:57:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:08.458 04:57:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:08.458 04:57:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.458 04:57:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:08.458 04:57:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.458 04:57:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.458 04:57:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:08.458 "name": "Existed_Raid", 00:08:08.458 "uuid": "f934ba1d-0f09-482d-9b3e-ec181432812c", 00:08:08.458 "strip_size_kb": 64, 00:08:08.458 "state": "online", 00:08:08.458 "raid_level": "concat", 00:08:08.458 "superblock": false, 00:08:08.458 "num_base_bdevs": 3, 00:08:08.458 "num_base_bdevs_discovered": 3, 00:08:08.458 "num_base_bdevs_operational": 3, 00:08:08.458 "base_bdevs_list": [ 00:08:08.458 { 00:08:08.458 "name": "BaseBdev1", 00:08:08.458 "uuid": "c14cb782-97b2-4f94-9f1c-caba2f6b34a3", 00:08:08.458 "is_configured": true, 00:08:08.458 "data_offset": 0, 00:08:08.458 "data_size": 65536 00:08:08.458 }, 00:08:08.458 { 00:08:08.458 "name": "BaseBdev2", 00:08:08.458 "uuid": "9eeedbbf-eef7-4b88-ae14-61ae5cf3d3b8", 00:08:08.458 "is_configured": true, 00:08:08.458 "data_offset": 0, 00:08:08.458 "data_size": 65536 00:08:08.458 }, 00:08:08.458 { 00:08:08.458 "name": "BaseBdev3", 00:08:08.458 "uuid": "caaaba7c-b6e8-4e91-8ec3-bc1b59099099", 00:08:08.458 "is_configured": true, 00:08:08.458 "data_offset": 0, 00:08:08.458 "data_size": 65536 00:08:08.458 } 00:08:08.458 ] 00:08:08.458 }' 00:08:08.458 04:57:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:08:08.458 04:57:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.027 04:57:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:09.027 04:57:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:09.027 04:57:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:09.027 04:57:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:09.027 04:57:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:09.027 04:57:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:09.027 04:57:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:09.027 04:57:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:09.027 04:57:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.027 04:57:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.027 [2024-12-14 04:57:19.682431] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:09.027 04:57:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.027 04:57:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:09.027 "name": "Existed_Raid", 00:08:09.027 "aliases": [ 00:08:09.027 "f934ba1d-0f09-482d-9b3e-ec181432812c" 00:08:09.027 ], 00:08:09.027 "product_name": "Raid Volume", 00:08:09.027 "block_size": 512, 00:08:09.027 "num_blocks": 196608, 00:08:09.027 "uuid": "f934ba1d-0f09-482d-9b3e-ec181432812c", 00:08:09.027 "assigned_rate_limits": { 00:08:09.027 "rw_ios_per_sec": 0, 00:08:09.027 "rw_mbytes_per_sec": 0, 00:08:09.027 "r_mbytes_per_sec": 
0, 00:08:09.027 "w_mbytes_per_sec": 0 00:08:09.027 }, 00:08:09.027 "claimed": false, 00:08:09.027 "zoned": false, 00:08:09.027 "supported_io_types": { 00:08:09.027 "read": true, 00:08:09.027 "write": true, 00:08:09.027 "unmap": true, 00:08:09.027 "flush": true, 00:08:09.027 "reset": true, 00:08:09.027 "nvme_admin": false, 00:08:09.027 "nvme_io": false, 00:08:09.027 "nvme_io_md": false, 00:08:09.027 "write_zeroes": true, 00:08:09.027 "zcopy": false, 00:08:09.027 "get_zone_info": false, 00:08:09.027 "zone_management": false, 00:08:09.027 "zone_append": false, 00:08:09.027 "compare": false, 00:08:09.027 "compare_and_write": false, 00:08:09.027 "abort": false, 00:08:09.027 "seek_hole": false, 00:08:09.027 "seek_data": false, 00:08:09.027 "copy": false, 00:08:09.027 "nvme_iov_md": false 00:08:09.027 }, 00:08:09.027 "memory_domains": [ 00:08:09.027 { 00:08:09.027 "dma_device_id": "system", 00:08:09.027 "dma_device_type": 1 00:08:09.027 }, 00:08:09.027 { 00:08:09.027 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:09.027 "dma_device_type": 2 00:08:09.027 }, 00:08:09.027 { 00:08:09.027 "dma_device_id": "system", 00:08:09.027 "dma_device_type": 1 00:08:09.027 }, 00:08:09.027 { 00:08:09.027 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:09.027 "dma_device_type": 2 00:08:09.027 }, 00:08:09.027 { 00:08:09.027 "dma_device_id": "system", 00:08:09.027 "dma_device_type": 1 00:08:09.027 }, 00:08:09.027 { 00:08:09.027 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:09.027 "dma_device_type": 2 00:08:09.027 } 00:08:09.027 ], 00:08:09.027 "driver_specific": { 00:08:09.027 "raid": { 00:08:09.027 "uuid": "f934ba1d-0f09-482d-9b3e-ec181432812c", 00:08:09.027 "strip_size_kb": 64, 00:08:09.027 "state": "online", 00:08:09.027 "raid_level": "concat", 00:08:09.027 "superblock": false, 00:08:09.027 "num_base_bdevs": 3, 00:08:09.027 "num_base_bdevs_discovered": 3, 00:08:09.027 "num_base_bdevs_operational": 3, 00:08:09.027 "base_bdevs_list": [ 00:08:09.027 { 00:08:09.027 "name": "BaseBdev1", 
00:08:09.027 "uuid": "c14cb782-97b2-4f94-9f1c-caba2f6b34a3", 00:08:09.027 "is_configured": true, 00:08:09.027 "data_offset": 0, 00:08:09.027 "data_size": 65536 00:08:09.027 }, 00:08:09.027 { 00:08:09.027 "name": "BaseBdev2", 00:08:09.027 "uuid": "9eeedbbf-eef7-4b88-ae14-61ae5cf3d3b8", 00:08:09.027 "is_configured": true, 00:08:09.027 "data_offset": 0, 00:08:09.027 "data_size": 65536 00:08:09.027 }, 00:08:09.027 { 00:08:09.027 "name": "BaseBdev3", 00:08:09.027 "uuid": "caaaba7c-b6e8-4e91-8ec3-bc1b59099099", 00:08:09.027 "is_configured": true, 00:08:09.027 "data_offset": 0, 00:08:09.027 "data_size": 65536 00:08:09.027 } 00:08:09.027 ] 00:08:09.027 } 00:08:09.027 } 00:08:09.027 }' 00:08:09.027 04:57:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:09.027 04:57:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:09.027 BaseBdev2 00:08:09.027 BaseBdev3' 00:08:09.027 04:57:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:09.027 04:57:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:09.027 04:57:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:09.027 04:57:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:09.027 04:57:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.027 04:57:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.027 04:57:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:09.027 04:57:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:08:09.027 04:57:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:09.027 04:57:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:09.027 04:57:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:09.027 04:57:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:09.027 04:57:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:09.027 04:57:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.027 04:57:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.027 04:57:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.027 04:57:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:09.027 04:57:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:09.027 04:57:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:09.287 04:57:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:09.287 04:57:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:09.287 04:57:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.287 04:57:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.287 04:57:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.287 04:57:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
00:08:09.287 04:57:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:09.287 04:57:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:09.287 04:57:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.287 04:57:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.287 [2024-12-14 04:57:19.941748] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:09.287 [2024-12-14 04:57:19.941779] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:09.287 [2024-12-14 04:57:19.941829] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:09.287 04:57:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.287 04:57:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:09.287 04:57:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:08:09.287 04:57:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:09.287 04:57:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:09.287 04:57:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:09.287 04:57:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:08:09.287 04:57:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:09.287 04:57:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:09.287 04:57:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:09.287 04:57:19 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:09.287 04:57:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:09.287 04:57:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:09.287 04:57:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:09.287 04:57:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:09.287 04:57:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:09.287 04:57:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:09.287 04:57:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:09.287 04:57:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.287 04:57:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.287 04:57:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.287 04:57:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:09.287 "name": "Existed_Raid", 00:08:09.287 "uuid": "f934ba1d-0f09-482d-9b3e-ec181432812c", 00:08:09.287 "strip_size_kb": 64, 00:08:09.287 "state": "offline", 00:08:09.287 "raid_level": "concat", 00:08:09.287 "superblock": false, 00:08:09.287 "num_base_bdevs": 3, 00:08:09.287 "num_base_bdevs_discovered": 2, 00:08:09.287 "num_base_bdevs_operational": 2, 00:08:09.287 "base_bdevs_list": [ 00:08:09.287 { 00:08:09.287 "name": null, 00:08:09.287 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:09.287 "is_configured": false, 00:08:09.287 "data_offset": 0, 00:08:09.287 "data_size": 65536 00:08:09.287 }, 00:08:09.287 { 00:08:09.288 "name": "BaseBdev2", 00:08:09.288 "uuid": 
"9eeedbbf-eef7-4b88-ae14-61ae5cf3d3b8", 00:08:09.288 "is_configured": true, 00:08:09.288 "data_offset": 0, 00:08:09.288 "data_size": 65536 00:08:09.288 }, 00:08:09.288 { 00:08:09.288 "name": "BaseBdev3", 00:08:09.288 "uuid": "caaaba7c-b6e8-4e91-8ec3-bc1b59099099", 00:08:09.288 "is_configured": true, 00:08:09.288 "data_offset": 0, 00:08:09.288 "data_size": 65536 00:08:09.288 } 00:08:09.288 ] 00:08:09.288 }' 00:08:09.288 04:57:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:09.288 04:57:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.547 04:57:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:09.547 04:57:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:09.547 04:57:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:09.547 04:57:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:09.547 04:57:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.547 04:57:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.547 04:57:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.807 04:57:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:09.807 04:57:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:09.807 04:57:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:09.807 04:57:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.807 04:57:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.807 [2024-12-14 04:57:20.440281] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:09.807 04:57:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.807 04:57:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:09.807 04:57:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:09.807 04:57:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:09.807 04:57:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:09.807 04:57:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.807 04:57:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.807 04:57:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.807 04:57:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:09.807 04:57:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:09.807 04:57:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:08:09.807 04:57:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.807 04:57:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.807 [2024-12-14 04:57:20.491334] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:09.807 [2024-12-14 04:57:20.491405] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:08:09.807 04:57:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.807 04:57:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:09.807 04:57:20 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:09.807 04:57:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:09.807 04:57:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.807 04:57:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:09.807 04:57:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.807 04:57:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.807 04:57:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:09.807 04:57:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:09.807 04:57:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:08:09.807 04:57:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:08:09.807 04:57:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:09.807 04:57:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:09.807 04:57:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.807 04:57:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.807 BaseBdev2 00:08:09.807 04:57:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.807 04:57:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:08:09.807 04:57:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:08:09.808 04:57:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:09.808 
04:57:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:09.808 04:57:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:09.808 04:57:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:09.808 04:57:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:09.808 04:57:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.808 04:57:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.808 04:57:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.808 04:57:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:09.808 04:57:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.808 04:57:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.808 [ 00:08:09.808 { 00:08:09.808 "name": "BaseBdev2", 00:08:09.808 "aliases": [ 00:08:09.808 "d16763bc-cf49-470f-8ea6-c3dcd8c31227" 00:08:09.808 ], 00:08:09.808 "product_name": "Malloc disk", 00:08:09.808 "block_size": 512, 00:08:09.808 "num_blocks": 65536, 00:08:09.808 "uuid": "d16763bc-cf49-470f-8ea6-c3dcd8c31227", 00:08:09.808 "assigned_rate_limits": { 00:08:09.808 "rw_ios_per_sec": 0, 00:08:09.808 "rw_mbytes_per_sec": 0, 00:08:09.808 "r_mbytes_per_sec": 0, 00:08:09.808 "w_mbytes_per_sec": 0 00:08:09.808 }, 00:08:09.808 "claimed": false, 00:08:09.808 "zoned": false, 00:08:09.808 "supported_io_types": { 00:08:09.808 "read": true, 00:08:09.808 "write": true, 00:08:09.808 "unmap": true, 00:08:09.808 "flush": true, 00:08:09.808 "reset": true, 00:08:09.808 "nvme_admin": false, 00:08:09.808 "nvme_io": false, 00:08:09.808 "nvme_io_md": false, 00:08:09.808 "write_zeroes": true, 
00:08:09.808 "zcopy": true, 00:08:09.808 "get_zone_info": false, 00:08:09.808 "zone_management": false, 00:08:09.808 "zone_append": false, 00:08:09.808 "compare": false, 00:08:09.808 "compare_and_write": false, 00:08:09.808 "abort": true, 00:08:09.808 "seek_hole": false, 00:08:09.808 "seek_data": false, 00:08:09.808 "copy": true, 00:08:09.808 "nvme_iov_md": false 00:08:09.808 }, 00:08:09.808 "memory_domains": [ 00:08:09.808 { 00:08:09.808 "dma_device_id": "system", 00:08:09.808 "dma_device_type": 1 00:08:09.808 }, 00:08:09.808 { 00:08:09.808 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:09.808 "dma_device_type": 2 00:08:09.808 } 00:08:09.808 ], 00:08:09.808 "driver_specific": {} 00:08:09.808 } 00:08:09.808 ] 00:08:09.808 04:57:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.808 04:57:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:09.808 04:57:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:09.808 04:57:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:09.808 04:57:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:09.808 04:57:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.808 04:57:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.808 BaseBdev3 00:08:09.808 04:57:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.808 04:57:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:08:09.808 04:57:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:08:09.808 04:57:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:09.808 04:57:20 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:09.808 04:57:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:09.808 04:57:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:09.808 04:57:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:09.808 04:57:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.808 04:57:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.808 04:57:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.808 04:57:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:09.808 04:57:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.808 04:57:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.808 [ 00:08:09.808 { 00:08:09.808 "name": "BaseBdev3", 00:08:09.808 "aliases": [ 00:08:09.808 "17c7b1e9-e717-4526-8c3c-cffbf0b3a944" 00:08:09.808 ], 00:08:09.808 "product_name": "Malloc disk", 00:08:09.808 "block_size": 512, 00:08:09.808 "num_blocks": 65536, 00:08:09.808 "uuid": "17c7b1e9-e717-4526-8c3c-cffbf0b3a944", 00:08:09.808 "assigned_rate_limits": { 00:08:09.808 "rw_ios_per_sec": 0, 00:08:09.808 "rw_mbytes_per_sec": 0, 00:08:09.808 "r_mbytes_per_sec": 0, 00:08:09.808 "w_mbytes_per_sec": 0 00:08:09.808 }, 00:08:09.808 "claimed": false, 00:08:09.808 "zoned": false, 00:08:09.808 "supported_io_types": { 00:08:09.808 "read": true, 00:08:09.808 "write": true, 00:08:09.808 "unmap": true, 00:08:09.808 "flush": true, 00:08:09.808 "reset": true, 00:08:09.808 "nvme_admin": false, 00:08:09.808 "nvme_io": false, 00:08:09.808 "nvme_io_md": false, 00:08:09.808 "write_zeroes": true, 
00:08:09.808 "zcopy": true, 00:08:09.808 "get_zone_info": false, 00:08:09.808 "zone_management": false, 00:08:09.808 "zone_append": false, 00:08:09.808 "compare": false, 00:08:09.808 "compare_and_write": false, 00:08:09.808 "abort": true, 00:08:09.808 "seek_hole": false, 00:08:09.808 "seek_data": false, 00:08:09.808 "copy": true, 00:08:09.808 "nvme_iov_md": false 00:08:09.808 }, 00:08:09.808 "memory_domains": [ 00:08:09.808 { 00:08:09.808 "dma_device_id": "system", 00:08:09.808 "dma_device_type": 1 00:08:09.808 }, 00:08:09.808 { 00:08:09.808 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:09.808 "dma_device_type": 2 00:08:09.808 } 00:08:09.808 ], 00:08:09.808 "driver_specific": {} 00:08:09.808 } 00:08:09.808 ] 00:08:09.808 04:57:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.808 04:57:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:09.808 04:57:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:09.808 04:57:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:09.808 04:57:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:09.808 04:57:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.808 04:57:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.808 [2024-12-14 04:57:20.661866] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:09.808 [2024-12-14 04:57:20.661925] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:09.808 [2024-12-14 04:57:20.661945] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:09.808 [2024-12-14 04:57:20.663696] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:09.808 04:57:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.808 04:57:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:09.808 04:57:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:09.808 04:57:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:09.808 04:57:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:09.808 04:57:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:09.808 04:57:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:09.808 04:57:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:09.808 04:57:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:09.808 04:57:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:09.808 04:57:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:09.808 04:57:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:09.808 04:57:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:09.808 04:57:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.808 04:57:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.068 04:57:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.068 04:57:20 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:10.068 "name": "Existed_Raid", 00:08:10.068 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:10.068 "strip_size_kb": 64, 00:08:10.068 "state": "configuring", 00:08:10.068 "raid_level": "concat", 00:08:10.068 "superblock": false, 00:08:10.068 "num_base_bdevs": 3, 00:08:10.068 "num_base_bdevs_discovered": 2, 00:08:10.068 "num_base_bdevs_operational": 3, 00:08:10.068 "base_bdevs_list": [ 00:08:10.068 { 00:08:10.068 "name": "BaseBdev1", 00:08:10.068 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:10.068 "is_configured": false, 00:08:10.068 "data_offset": 0, 00:08:10.068 "data_size": 0 00:08:10.068 }, 00:08:10.068 { 00:08:10.068 "name": "BaseBdev2", 00:08:10.068 "uuid": "d16763bc-cf49-470f-8ea6-c3dcd8c31227", 00:08:10.068 "is_configured": true, 00:08:10.068 "data_offset": 0, 00:08:10.068 "data_size": 65536 00:08:10.068 }, 00:08:10.068 { 00:08:10.068 "name": "BaseBdev3", 00:08:10.068 "uuid": "17c7b1e9-e717-4526-8c3c-cffbf0b3a944", 00:08:10.068 "is_configured": true, 00:08:10.068 "data_offset": 0, 00:08:10.068 "data_size": 65536 00:08:10.068 } 00:08:10.068 ] 00:08:10.068 }' 00:08:10.068 04:57:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:10.068 04:57:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.327 04:57:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:08:10.327 04:57:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.327 04:57:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.328 [2024-12-14 04:57:21.089136] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:10.328 04:57:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.328 04:57:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:10.328 04:57:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:10.328 04:57:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:10.328 04:57:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:10.328 04:57:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:10.328 04:57:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:10.328 04:57:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:10.328 04:57:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:10.328 04:57:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:10.328 04:57:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:10.328 04:57:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:10.328 04:57:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:10.328 04:57:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.328 04:57:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.328 04:57:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.328 04:57:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:10.328 "name": "Existed_Raid", 00:08:10.328 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:10.328 "strip_size_kb": 64, 00:08:10.328 "state": "configuring", 00:08:10.328 "raid_level": "concat", 00:08:10.328 "superblock": false, 
00:08:10.328 "num_base_bdevs": 3, 00:08:10.328 "num_base_bdevs_discovered": 1, 00:08:10.328 "num_base_bdevs_operational": 3, 00:08:10.328 "base_bdevs_list": [ 00:08:10.328 { 00:08:10.328 "name": "BaseBdev1", 00:08:10.328 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:10.328 "is_configured": false, 00:08:10.328 "data_offset": 0, 00:08:10.328 "data_size": 0 00:08:10.328 }, 00:08:10.328 { 00:08:10.328 "name": null, 00:08:10.328 "uuid": "d16763bc-cf49-470f-8ea6-c3dcd8c31227", 00:08:10.328 "is_configured": false, 00:08:10.328 "data_offset": 0, 00:08:10.328 "data_size": 65536 00:08:10.328 }, 00:08:10.328 { 00:08:10.328 "name": "BaseBdev3", 00:08:10.328 "uuid": "17c7b1e9-e717-4526-8c3c-cffbf0b3a944", 00:08:10.328 "is_configured": true, 00:08:10.328 "data_offset": 0, 00:08:10.328 "data_size": 65536 00:08:10.328 } 00:08:10.328 ] 00:08:10.328 }' 00:08:10.328 04:57:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:10.328 04:57:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.897 04:57:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:10.897 04:57:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:10.897 04:57:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.897 04:57:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.897 04:57:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.897 04:57:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:08:10.897 04:57:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:10.897 04:57:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.897 
04:57:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.897 [2024-12-14 04:57:21.595261] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:10.897 BaseBdev1 00:08:10.897 04:57:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.897 04:57:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:08:10.897 04:57:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:08:10.897 04:57:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:10.897 04:57:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:10.897 04:57:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:10.897 04:57:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:10.897 04:57:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:10.897 04:57:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.897 04:57:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.897 04:57:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.897 04:57:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:10.897 04:57:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.897 04:57:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.897 [ 00:08:10.897 { 00:08:10.897 "name": "BaseBdev1", 00:08:10.897 "aliases": [ 00:08:10.897 "15ad5a22-9aed-4072-9424-8606179c4543" 00:08:10.897 ], 00:08:10.897 "product_name": 
"Malloc disk", 00:08:10.897 "block_size": 512, 00:08:10.897 "num_blocks": 65536, 00:08:10.897 "uuid": "15ad5a22-9aed-4072-9424-8606179c4543", 00:08:10.897 "assigned_rate_limits": { 00:08:10.897 "rw_ios_per_sec": 0, 00:08:10.897 "rw_mbytes_per_sec": 0, 00:08:10.897 "r_mbytes_per_sec": 0, 00:08:10.897 "w_mbytes_per_sec": 0 00:08:10.897 }, 00:08:10.897 "claimed": true, 00:08:10.897 "claim_type": "exclusive_write", 00:08:10.897 "zoned": false, 00:08:10.897 "supported_io_types": { 00:08:10.897 "read": true, 00:08:10.897 "write": true, 00:08:10.897 "unmap": true, 00:08:10.897 "flush": true, 00:08:10.897 "reset": true, 00:08:10.897 "nvme_admin": false, 00:08:10.897 "nvme_io": false, 00:08:10.897 "nvme_io_md": false, 00:08:10.897 "write_zeroes": true, 00:08:10.897 "zcopy": true, 00:08:10.897 "get_zone_info": false, 00:08:10.897 "zone_management": false, 00:08:10.897 "zone_append": false, 00:08:10.897 "compare": false, 00:08:10.897 "compare_and_write": false, 00:08:10.897 "abort": true, 00:08:10.897 "seek_hole": false, 00:08:10.897 "seek_data": false, 00:08:10.897 "copy": true, 00:08:10.897 "nvme_iov_md": false 00:08:10.897 }, 00:08:10.897 "memory_domains": [ 00:08:10.897 { 00:08:10.897 "dma_device_id": "system", 00:08:10.897 "dma_device_type": 1 00:08:10.897 }, 00:08:10.897 { 00:08:10.897 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:10.897 "dma_device_type": 2 00:08:10.897 } 00:08:10.897 ], 00:08:10.897 "driver_specific": {} 00:08:10.897 } 00:08:10.897 ] 00:08:10.897 04:57:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.897 04:57:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:10.897 04:57:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:10.897 04:57:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:10.897 04:57:21 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:10.897 04:57:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:10.897 04:57:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:10.897 04:57:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:10.897 04:57:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:10.897 04:57:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:10.897 04:57:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:10.897 04:57:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:10.897 04:57:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:10.897 04:57:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:10.897 04:57:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.897 04:57:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.897 04:57:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.897 04:57:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:10.897 "name": "Existed_Raid", 00:08:10.897 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:10.897 "strip_size_kb": 64, 00:08:10.897 "state": "configuring", 00:08:10.897 "raid_level": "concat", 00:08:10.897 "superblock": false, 00:08:10.897 "num_base_bdevs": 3, 00:08:10.897 "num_base_bdevs_discovered": 2, 00:08:10.897 "num_base_bdevs_operational": 3, 00:08:10.897 "base_bdevs_list": [ 00:08:10.897 { 00:08:10.897 "name": "BaseBdev1", 
00:08:10.897 "uuid": "15ad5a22-9aed-4072-9424-8606179c4543", 00:08:10.897 "is_configured": true, 00:08:10.897 "data_offset": 0, 00:08:10.897 "data_size": 65536 00:08:10.897 }, 00:08:10.897 { 00:08:10.897 "name": null, 00:08:10.897 "uuid": "d16763bc-cf49-470f-8ea6-c3dcd8c31227", 00:08:10.897 "is_configured": false, 00:08:10.897 "data_offset": 0, 00:08:10.897 "data_size": 65536 00:08:10.897 }, 00:08:10.897 { 00:08:10.897 "name": "BaseBdev3", 00:08:10.897 "uuid": "17c7b1e9-e717-4526-8c3c-cffbf0b3a944", 00:08:10.897 "is_configured": true, 00:08:10.897 "data_offset": 0, 00:08:10.897 "data_size": 65536 00:08:10.897 } 00:08:10.897 ] 00:08:10.897 }' 00:08:10.897 04:57:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:10.897 04:57:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.467 04:57:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:11.467 04:57:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:11.467 04:57:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.467 04:57:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.467 04:57:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.467 04:57:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:08:11.467 04:57:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:08:11.467 04:57:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.467 04:57:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.467 [2024-12-14 04:57:22.082564] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:11.467 
04:57:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.467 04:57:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:11.467 04:57:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:11.467 04:57:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:11.467 04:57:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:11.467 04:57:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:11.467 04:57:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:11.467 04:57:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:11.467 04:57:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:11.467 04:57:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:11.467 04:57:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:11.467 04:57:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:11.467 04:57:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.467 04:57:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:11.467 04:57:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.467 04:57:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.467 04:57:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:11.467 "name": "Existed_Raid", 00:08:11.467 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:08:11.467 "strip_size_kb": 64, 00:08:11.467 "state": "configuring", 00:08:11.467 "raid_level": "concat", 00:08:11.467 "superblock": false, 00:08:11.467 "num_base_bdevs": 3, 00:08:11.467 "num_base_bdevs_discovered": 1, 00:08:11.467 "num_base_bdevs_operational": 3, 00:08:11.467 "base_bdevs_list": [ 00:08:11.467 { 00:08:11.467 "name": "BaseBdev1", 00:08:11.467 "uuid": "15ad5a22-9aed-4072-9424-8606179c4543", 00:08:11.467 "is_configured": true, 00:08:11.467 "data_offset": 0, 00:08:11.467 "data_size": 65536 00:08:11.467 }, 00:08:11.467 { 00:08:11.467 "name": null, 00:08:11.467 "uuid": "d16763bc-cf49-470f-8ea6-c3dcd8c31227", 00:08:11.467 "is_configured": false, 00:08:11.467 "data_offset": 0, 00:08:11.467 "data_size": 65536 00:08:11.467 }, 00:08:11.467 { 00:08:11.467 "name": null, 00:08:11.467 "uuid": "17c7b1e9-e717-4526-8c3c-cffbf0b3a944", 00:08:11.467 "is_configured": false, 00:08:11.467 "data_offset": 0, 00:08:11.467 "data_size": 65536 00:08:11.467 } 00:08:11.467 ] 00:08:11.467 }' 00:08:11.467 04:57:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:11.467 04:57:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.727 04:57:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:11.727 04:57:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:11.727 04:57:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.727 04:57:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.727 04:57:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.727 04:57:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:08:11.727 04:57:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:08:11.727 04:57:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.727 04:57:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.727 [2024-12-14 04:57:22.485880] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:11.727 04:57:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.727 04:57:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:11.727 04:57:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:11.727 04:57:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:11.727 04:57:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:11.727 04:57:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:11.727 04:57:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:11.727 04:57:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:11.727 04:57:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:11.727 04:57:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:11.727 04:57:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:11.727 04:57:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:11.727 04:57:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:11.727 04:57:22 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.727 04:57:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.727 04:57:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.727 04:57:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:11.727 "name": "Existed_Raid", 00:08:11.727 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:11.727 "strip_size_kb": 64, 00:08:11.727 "state": "configuring", 00:08:11.727 "raid_level": "concat", 00:08:11.727 "superblock": false, 00:08:11.727 "num_base_bdevs": 3, 00:08:11.727 "num_base_bdevs_discovered": 2, 00:08:11.727 "num_base_bdevs_operational": 3, 00:08:11.727 "base_bdevs_list": [ 00:08:11.727 { 00:08:11.727 "name": "BaseBdev1", 00:08:11.727 "uuid": "15ad5a22-9aed-4072-9424-8606179c4543", 00:08:11.727 "is_configured": true, 00:08:11.727 "data_offset": 0, 00:08:11.727 "data_size": 65536 00:08:11.727 }, 00:08:11.727 { 00:08:11.727 "name": null, 00:08:11.727 "uuid": "d16763bc-cf49-470f-8ea6-c3dcd8c31227", 00:08:11.727 "is_configured": false, 00:08:11.727 "data_offset": 0, 00:08:11.727 "data_size": 65536 00:08:11.727 }, 00:08:11.727 { 00:08:11.727 "name": "BaseBdev3", 00:08:11.727 "uuid": "17c7b1e9-e717-4526-8c3c-cffbf0b3a944", 00:08:11.727 "is_configured": true, 00:08:11.727 "data_offset": 0, 00:08:11.727 "data_size": 65536 00:08:11.727 } 00:08:11.727 ] 00:08:11.727 }' 00:08:11.727 04:57:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:11.727 04:57:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.297 04:57:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:12.297 04:57:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:12.297 04:57:22 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.297 04:57:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.297 04:57:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.297 04:57:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:08:12.297 04:57:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:12.297 04:57:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.297 04:57:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.297 [2024-12-14 04:57:23.004995] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:12.297 04:57:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.297 04:57:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:12.297 04:57:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:12.297 04:57:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:12.297 04:57:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:12.297 04:57:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:12.297 04:57:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:12.297 04:57:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:12.297 04:57:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:12.297 04:57:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:12.297 
04:57:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:12.297 04:57:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:12.297 04:57:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:12.297 04:57:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.297 04:57:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.297 04:57:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.297 04:57:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:12.297 "name": "Existed_Raid", 00:08:12.297 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:12.297 "strip_size_kb": 64, 00:08:12.297 "state": "configuring", 00:08:12.297 "raid_level": "concat", 00:08:12.297 "superblock": false, 00:08:12.297 "num_base_bdevs": 3, 00:08:12.297 "num_base_bdevs_discovered": 1, 00:08:12.297 "num_base_bdevs_operational": 3, 00:08:12.297 "base_bdevs_list": [ 00:08:12.297 { 00:08:12.297 "name": null, 00:08:12.297 "uuid": "15ad5a22-9aed-4072-9424-8606179c4543", 00:08:12.297 "is_configured": false, 00:08:12.297 "data_offset": 0, 00:08:12.297 "data_size": 65536 00:08:12.297 }, 00:08:12.297 { 00:08:12.297 "name": null, 00:08:12.297 "uuid": "d16763bc-cf49-470f-8ea6-c3dcd8c31227", 00:08:12.297 "is_configured": false, 00:08:12.297 "data_offset": 0, 00:08:12.297 "data_size": 65536 00:08:12.298 }, 00:08:12.298 { 00:08:12.298 "name": "BaseBdev3", 00:08:12.298 "uuid": "17c7b1e9-e717-4526-8c3c-cffbf0b3a944", 00:08:12.298 "is_configured": true, 00:08:12.298 "data_offset": 0, 00:08:12.298 "data_size": 65536 00:08:12.298 } 00:08:12.298 ] 00:08:12.298 }' 00:08:12.298 04:57:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:12.298 04:57:23 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.556 04:57:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:12.556 04:57:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:12.556 04:57:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.557 04:57:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.557 04:57:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.816 04:57:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:08:12.816 04:57:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:08:12.816 04:57:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.816 04:57:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.816 [2024-12-14 04:57:23.463120] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:12.816 04:57:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.816 04:57:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:12.816 04:57:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:12.816 04:57:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:12.816 04:57:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:12.816 04:57:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:12.816 04:57:23 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:12.816 04:57:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:12.816 04:57:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:12.816 04:57:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:12.816 04:57:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:12.816 04:57:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:12.816 04:57:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.816 04:57:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.816 04:57:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:12.816 04:57:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.816 04:57:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:12.816 "name": "Existed_Raid", 00:08:12.816 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:12.816 "strip_size_kb": 64, 00:08:12.816 "state": "configuring", 00:08:12.816 "raid_level": "concat", 00:08:12.816 "superblock": false, 00:08:12.816 "num_base_bdevs": 3, 00:08:12.816 "num_base_bdevs_discovered": 2, 00:08:12.816 "num_base_bdevs_operational": 3, 00:08:12.816 "base_bdevs_list": [ 00:08:12.816 { 00:08:12.816 "name": null, 00:08:12.816 "uuid": "15ad5a22-9aed-4072-9424-8606179c4543", 00:08:12.816 "is_configured": false, 00:08:12.816 "data_offset": 0, 00:08:12.816 "data_size": 65536 00:08:12.816 }, 00:08:12.816 { 00:08:12.816 "name": "BaseBdev2", 00:08:12.816 "uuid": "d16763bc-cf49-470f-8ea6-c3dcd8c31227", 00:08:12.816 "is_configured": true, 00:08:12.816 "data_offset": 
0, 00:08:12.816 "data_size": 65536 00:08:12.816 }, 00:08:12.816 { 00:08:12.816 "name": "BaseBdev3", 00:08:12.816 "uuid": "17c7b1e9-e717-4526-8c3c-cffbf0b3a944", 00:08:12.816 "is_configured": true, 00:08:12.816 "data_offset": 0, 00:08:12.816 "data_size": 65536 00:08:12.816 } 00:08:12.816 ] 00:08:12.816 }' 00:08:12.816 04:57:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:12.816 04:57:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.075 04:57:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:13.075 04:57:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.075 04:57:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.075 04:57:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:13.075 04:57:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.075 04:57:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:08:13.075 04:57:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:13.075 04:57:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.075 04:57:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.075 04:57:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:08:13.075 04:57:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.075 04:57:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 15ad5a22-9aed-4072-9424-8606179c4543 00:08:13.075 04:57:23 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.075 04:57:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.335 [2024-12-14 04:57:23.965255] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:08:13.335 [2024-12-14 04:57:23.965300] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:08:13.335 [2024-12-14 04:57:23.965310] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:08:13.335 [2024-12-14 04:57:23.965557] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:08:13.335 [2024-12-14 04:57:23.965674] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:08:13.335 [2024-12-14 04:57:23.965693] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:08:13.335 [2024-12-14 04:57:23.965891] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:13.335 NewBaseBdev 00:08:13.335 04:57:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.335 04:57:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:08:13.335 04:57:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:08:13.335 04:57:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:13.335 04:57:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:13.335 04:57:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:13.336 04:57:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:13.336 04:57:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:13.336 
04:57:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.336 04:57:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.336 04:57:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.336 04:57:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:08:13.336 04:57:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.336 04:57:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.336 [ 00:08:13.336 { 00:08:13.336 "name": "NewBaseBdev", 00:08:13.336 "aliases": [ 00:08:13.336 "15ad5a22-9aed-4072-9424-8606179c4543" 00:08:13.336 ], 00:08:13.336 "product_name": "Malloc disk", 00:08:13.336 "block_size": 512, 00:08:13.336 "num_blocks": 65536, 00:08:13.336 "uuid": "15ad5a22-9aed-4072-9424-8606179c4543", 00:08:13.336 "assigned_rate_limits": { 00:08:13.336 "rw_ios_per_sec": 0, 00:08:13.336 "rw_mbytes_per_sec": 0, 00:08:13.336 "r_mbytes_per_sec": 0, 00:08:13.336 "w_mbytes_per_sec": 0 00:08:13.336 }, 00:08:13.336 "claimed": true, 00:08:13.336 "claim_type": "exclusive_write", 00:08:13.336 "zoned": false, 00:08:13.336 "supported_io_types": { 00:08:13.336 "read": true, 00:08:13.336 "write": true, 00:08:13.336 "unmap": true, 00:08:13.336 "flush": true, 00:08:13.336 "reset": true, 00:08:13.336 "nvme_admin": false, 00:08:13.336 "nvme_io": false, 00:08:13.336 "nvme_io_md": false, 00:08:13.336 "write_zeroes": true, 00:08:13.336 "zcopy": true, 00:08:13.336 "get_zone_info": false, 00:08:13.336 "zone_management": false, 00:08:13.336 "zone_append": false, 00:08:13.336 "compare": false, 00:08:13.336 "compare_and_write": false, 00:08:13.336 "abort": true, 00:08:13.336 "seek_hole": false, 00:08:13.336 "seek_data": false, 00:08:13.336 "copy": true, 00:08:13.336 "nvme_iov_md": false 00:08:13.336 }, 00:08:13.336 
"memory_domains": [ 00:08:13.336 { 00:08:13.336 "dma_device_id": "system", 00:08:13.336 "dma_device_type": 1 00:08:13.336 }, 00:08:13.336 { 00:08:13.336 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:13.336 "dma_device_type": 2 00:08:13.336 } 00:08:13.336 ], 00:08:13.336 "driver_specific": {} 00:08:13.336 } 00:08:13.336 ] 00:08:13.336 04:57:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.336 04:57:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:13.336 04:57:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:08:13.336 04:57:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:13.336 04:57:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:13.336 04:57:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:13.336 04:57:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:13.336 04:57:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:13.336 04:57:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:13.336 04:57:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:13.336 04:57:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:13.336 04:57:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:13.336 04:57:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:13.336 04:57:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:13.336 04:57:24 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.336 04:57:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.336 04:57:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.336 04:57:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:13.336 "name": "Existed_Raid", 00:08:13.336 "uuid": "336797af-9da4-4088-ae18-c71fc067f2b5", 00:08:13.336 "strip_size_kb": 64, 00:08:13.336 "state": "online", 00:08:13.336 "raid_level": "concat", 00:08:13.336 "superblock": false, 00:08:13.336 "num_base_bdevs": 3, 00:08:13.336 "num_base_bdevs_discovered": 3, 00:08:13.336 "num_base_bdevs_operational": 3, 00:08:13.336 "base_bdevs_list": [ 00:08:13.336 { 00:08:13.336 "name": "NewBaseBdev", 00:08:13.336 "uuid": "15ad5a22-9aed-4072-9424-8606179c4543", 00:08:13.336 "is_configured": true, 00:08:13.336 "data_offset": 0, 00:08:13.336 "data_size": 65536 00:08:13.336 }, 00:08:13.336 { 00:08:13.336 "name": "BaseBdev2", 00:08:13.336 "uuid": "d16763bc-cf49-470f-8ea6-c3dcd8c31227", 00:08:13.336 "is_configured": true, 00:08:13.336 "data_offset": 0, 00:08:13.336 "data_size": 65536 00:08:13.336 }, 00:08:13.336 { 00:08:13.336 "name": "BaseBdev3", 00:08:13.336 "uuid": "17c7b1e9-e717-4526-8c3c-cffbf0b3a944", 00:08:13.336 "is_configured": true, 00:08:13.336 "data_offset": 0, 00:08:13.336 "data_size": 65536 00:08:13.336 } 00:08:13.336 ] 00:08:13.336 }' 00:08:13.336 04:57:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:13.336 04:57:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.597 04:57:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:08:13.597 04:57:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:13.597 04:57:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 
-- # local raid_bdev_info 00:08:13.597 04:57:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:13.597 04:57:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:13.597 04:57:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:13.597 04:57:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:13.597 04:57:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.597 04:57:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.597 04:57:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:13.597 [2024-12-14 04:57:24.404819] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:13.597 04:57:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.597 04:57:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:13.597 "name": "Existed_Raid", 00:08:13.597 "aliases": [ 00:08:13.597 "336797af-9da4-4088-ae18-c71fc067f2b5" 00:08:13.597 ], 00:08:13.597 "product_name": "Raid Volume", 00:08:13.597 "block_size": 512, 00:08:13.597 "num_blocks": 196608, 00:08:13.597 "uuid": "336797af-9da4-4088-ae18-c71fc067f2b5", 00:08:13.597 "assigned_rate_limits": { 00:08:13.597 "rw_ios_per_sec": 0, 00:08:13.597 "rw_mbytes_per_sec": 0, 00:08:13.597 "r_mbytes_per_sec": 0, 00:08:13.597 "w_mbytes_per_sec": 0 00:08:13.597 }, 00:08:13.597 "claimed": false, 00:08:13.597 "zoned": false, 00:08:13.597 "supported_io_types": { 00:08:13.597 "read": true, 00:08:13.597 "write": true, 00:08:13.597 "unmap": true, 00:08:13.597 "flush": true, 00:08:13.597 "reset": true, 00:08:13.597 "nvme_admin": false, 00:08:13.597 "nvme_io": false, 00:08:13.597 "nvme_io_md": false, 00:08:13.597 "write_zeroes": true, 
00:08:13.597 "zcopy": false, 00:08:13.597 "get_zone_info": false, 00:08:13.597 "zone_management": false, 00:08:13.597 "zone_append": false, 00:08:13.597 "compare": false, 00:08:13.597 "compare_and_write": false, 00:08:13.597 "abort": false, 00:08:13.597 "seek_hole": false, 00:08:13.597 "seek_data": false, 00:08:13.597 "copy": false, 00:08:13.597 "nvme_iov_md": false 00:08:13.597 }, 00:08:13.597 "memory_domains": [ 00:08:13.597 { 00:08:13.597 "dma_device_id": "system", 00:08:13.597 "dma_device_type": 1 00:08:13.597 }, 00:08:13.597 { 00:08:13.597 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:13.597 "dma_device_type": 2 00:08:13.597 }, 00:08:13.597 { 00:08:13.597 "dma_device_id": "system", 00:08:13.597 "dma_device_type": 1 00:08:13.597 }, 00:08:13.597 { 00:08:13.597 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:13.597 "dma_device_type": 2 00:08:13.597 }, 00:08:13.597 { 00:08:13.597 "dma_device_id": "system", 00:08:13.597 "dma_device_type": 1 00:08:13.597 }, 00:08:13.597 { 00:08:13.597 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:13.597 "dma_device_type": 2 00:08:13.597 } 00:08:13.597 ], 00:08:13.597 "driver_specific": { 00:08:13.597 "raid": { 00:08:13.597 "uuid": "336797af-9da4-4088-ae18-c71fc067f2b5", 00:08:13.597 "strip_size_kb": 64, 00:08:13.597 "state": "online", 00:08:13.597 "raid_level": "concat", 00:08:13.597 "superblock": false, 00:08:13.597 "num_base_bdevs": 3, 00:08:13.597 "num_base_bdevs_discovered": 3, 00:08:13.597 "num_base_bdevs_operational": 3, 00:08:13.597 "base_bdevs_list": [ 00:08:13.597 { 00:08:13.597 "name": "NewBaseBdev", 00:08:13.597 "uuid": "15ad5a22-9aed-4072-9424-8606179c4543", 00:08:13.597 "is_configured": true, 00:08:13.597 "data_offset": 0, 00:08:13.597 "data_size": 65536 00:08:13.597 }, 00:08:13.597 { 00:08:13.597 "name": "BaseBdev2", 00:08:13.597 "uuid": "d16763bc-cf49-470f-8ea6-c3dcd8c31227", 00:08:13.597 "is_configured": true, 00:08:13.597 "data_offset": 0, 00:08:13.597 "data_size": 65536 00:08:13.597 }, 00:08:13.597 { 
00:08:13.597 "name": "BaseBdev3", 00:08:13.597 "uuid": "17c7b1e9-e717-4526-8c3c-cffbf0b3a944", 00:08:13.597 "is_configured": true, 00:08:13.597 "data_offset": 0, 00:08:13.597 "data_size": 65536 00:08:13.597 } 00:08:13.597 ] 00:08:13.597 } 00:08:13.597 } 00:08:13.597 }' 00:08:13.597 04:57:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:13.597 04:57:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:08:13.597 BaseBdev2 00:08:13.597 BaseBdev3' 00:08:13.597 04:57:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:13.858 04:57:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:13.858 04:57:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:13.858 04:57:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:08:13.858 04:57:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:13.858 04:57:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.858 04:57:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.858 04:57:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.858 04:57:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:13.858 04:57:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:13.858 04:57:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:13.858 04:57:24 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:13.858 04:57:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:13.858 04:57:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.858 04:57:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.858 04:57:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.858 04:57:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:13.858 04:57:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:13.858 04:57:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:13.858 04:57:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:13.858 04:57:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.858 04:57:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.858 04:57:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:13.858 04:57:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.858 04:57:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:13.858 04:57:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:13.858 04:57:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:13.858 04:57:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.858 04:57:24 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:08:13.858 [2024-12-14 04:57:24.656117] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:13.858 [2024-12-14 04:57:24.656146] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:13.858 [2024-12-14 04:57:24.656223] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:13.858 [2024-12-14 04:57:24.656276] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:13.858 [2024-12-14 04:57:24.656296] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline 00:08:13.858 04:57:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.858 04:57:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 76765 00:08:13.858 04:57:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 76765 ']' 00:08:13.858 04:57:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 76765 00:08:13.858 04:57:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:08:13.858 04:57:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:13.858 04:57:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76765 00:08:13.858 04:57:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:13.858 04:57:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:13.858 killing process with pid 76765 00:08:13.858 04:57:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76765' 00:08:13.858 04:57:24 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@969 -- # kill 76765 00:08:13.858 [2024-12-14 04:57:24.706740] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:13.858 04:57:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 76765 00:08:13.858 [2024-12-14 04:57:24.736631] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:14.117 04:57:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:08:14.117 00:08:14.118 real 0m8.558s 00:08:14.118 user 0m14.637s 00:08:14.118 sys 0m1.666s 00:08:14.118 04:57:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:14.118 04:57:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.118 ************************************ 00:08:14.118 END TEST raid_state_function_test 00:08:14.118 ************************************ 00:08:14.381 04:57:25 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:08:14.381 04:57:25 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:14.381 04:57:25 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:14.381 04:57:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:14.381 ************************************ 00:08:14.381 START TEST raid_state_function_test_sb 00:08:14.381 ************************************ 00:08:14.381 04:57:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 3 true 00:08:14.381 04:57:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:08:14.381 04:57:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:08:14.381 04:57:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:08:14.381 04:57:25 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:14.381 04:57:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:14.381 04:57:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:14.381 04:57:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:14.381 04:57:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:14.381 04:57:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:14.381 04:57:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:14.381 04:57:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:14.381 04:57:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:14.381 04:57:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:14.381 04:57:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:14.381 04:57:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:14.381 04:57:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:14.381 04:57:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:14.381 04:57:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:14.381 04:57:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:14.381 04:57:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:14.381 04:57:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:14.381 04:57:25 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:08:14.381 04:57:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:14.381 04:57:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:14.381 04:57:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:08:14.381 04:57:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:08:14.381 04:57:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=77370 00:08:14.381 04:57:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:14.381 04:57:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 77370' 00:08:14.381 Process raid pid: 77370 00:08:14.381 04:57:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 77370 00:08:14.381 04:57:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 77370 ']' 00:08:14.381 04:57:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:14.381 04:57:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:14.381 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:14.381 04:57:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:14.381 04:57:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:14.381 04:57:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:14.381 [2024-12-14 04:57:25.145383] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:08:14.381 [2024-12-14 04:57:25.145519] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:14.642 [2024-12-14 04:57:25.305216] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:14.642 [2024-12-14 04:57:25.349763] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:14.642 [2024-12-14 04:57:25.391579] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:14.642 [2024-12-14 04:57:25.391621] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:15.209 04:57:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:15.209 04:57:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:08:15.209 04:57:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:15.209 04:57:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.209 04:57:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:15.209 [2024-12-14 04:57:25.973038] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:15.209 [2024-12-14 04:57:25.973104] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:15.209 [2024-12-14 
04:57:25.973117] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:15.209 [2024-12-14 04:57:25.973126] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:15.209 [2024-12-14 04:57:25.973132] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:15.209 [2024-12-14 04:57:25.973145] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:15.209 04:57:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.209 04:57:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:15.209 04:57:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:15.209 04:57:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:15.209 04:57:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:15.209 04:57:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:15.209 04:57:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:15.209 04:57:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:15.209 04:57:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:15.209 04:57:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:15.209 04:57:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:15.209 04:57:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:15.209 04:57:25 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:15.209 04:57:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.209 04:57:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:15.209 04:57:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.209 04:57:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:15.209 "name": "Existed_Raid", 00:08:15.209 "uuid": "31a52de0-83b6-48f6-a60a-73b503f54e3a", 00:08:15.209 "strip_size_kb": 64, 00:08:15.209 "state": "configuring", 00:08:15.209 "raid_level": "concat", 00:08:15.209 "superblock": true, 00:08:15.209 "num_base_bdevs": 3, 00:08:15.210 "num_base_bdevs_discovered": 0, 00:08:15.210 "num_base_bdevs_operational": 3, 00:08:15.210 "base_bdevs_list": [ 00:08:15.210 { 00:08:15.210 "name": "BaseBdev1", 00:08:15.210 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:15.210 "is_configured": false, 00:08:15.210 "data_offset": 0, 00:08:15.210 "data_size": 0 00:08:15.210 }, 00:08:15.210 { 00:08:15.210 "name": "BaseBdev2", 00:08:15.210 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:15.210 "is_configured": false, 00:08:15.210 "data_offset": 0, 00:08:15.210 "data_size": 0 00:08:15.210 }, 00:08:15.210 { 00:08:15.210 "name": "BaseBdev3", 00:08:15.210 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:15.210 "is_configured": false, 00:08:15.210 "data_offset": 0, 00:08:15.210 "data_size": 0 00:08:15.210 } 00:08:15.210 ] 00:08:15.210 }' 00:08:15.210 04:57:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:15.210 04:57:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:15.779 04:57:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:15.779 04:57:26 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.779 04:57:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:15.779 [2024-12-14 04:57:26.404239] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:15.779 [2024-12-14 04:57:26.404285] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:08:15.779 04:57:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.779 04:57:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:15.779 04:57:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.779 04:57:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:15.779 [2024-12-14 04:57:26.412264] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:15.779 [2024-12-14 04:57:26.412321] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:15.779 [2024-12-14 04:57:26.412330] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:15.779 [2024-12-14 04:57:26.412338] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:15.779 [2024-12-14 04:57:26.412344] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:15.779 [2024-12-14 04:57:26.412353] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:15.779 04:57:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.779 04:57:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:15.779 
04:57:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.779 04:57:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:15.779 [2024-12-14 04:57:26.428979] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:15.779 BaseBdev1 00:08:15.779 04:57:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.779 04:57:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:15.779 04:57:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:08:15.779 04:57:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:15.779 04:57:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:15.779 04:57:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:15.779 04:57:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:15.779 04:57:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:15.779 04:57:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.779 04:57:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:15.779 04:57:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.779 04:57:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:15.779 04:57:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.779 04:57:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:15.779 [ 00:08:15.779 { 
00:08:15.779 "name": "BaseBdev1", 00:08:15.779 "aliases": [ 00:08:15.779 "d2481cb9-8137-4499-9d3c-d76b8ad28e53" 00:08:15.779 ], 00:08:15.779 "product_name": "Malloc disk", 00:08:15.779 "block_size": 512, 00:08:15.779 "num_blocks": 65536, 00:08:15.779 "uuid": "d2481cb9-8137-4499-9d3c-d76b8ad28e53", 00:08:15.779 "assigned_rate_limits": { 00:08:15.779 "rw_ios_per_sec": 0, 00:08:15.779 "rw_mbytes_per_sec": 0, 00:08:15.779 "r_mbytes_per_sec": 0, 00:08:15.779 "w_mbytes_per_sec": 0 00:08:15.779 }, 00:08:15.779 "claimed": true, 00:08:15.779 "claim_type": "exclusive_write", 00:08:15.779 "zoned": false, 00:08:15.779 "supported_io_types": { 00:08:15.779 "read": true, 00:08:15.779 "write": true, 00:08:15.779 "unmap": true, 00:08:15.779 "flush": true, 00:08:15.779 "reset": true, 00:08:15.779 "nvme_admin": false, 00:08:15.779 "nvme_io": false, 00:08:15.779 "nvme_io_md": false, 00:08:15.779 "write_zeroes": true, 00:08:15.779 "zcopy": true, 00:08:15.780 "get_zone_info": false, 00:08:15.780 "zone_management": false, 00:08:15.780 "zone_append": false, 00:08:15.780 "compare": false, 00:08:15.780 "compare_and_write": false, 00:08:15.780 "abort": true, 00:08:15.780 "seek_hole": false, 00:08:15.780 "seek_data": false, 00:08:15.780 "copy": true, 00:08:15.780 "nvme_iov_md": false 00:08:15.780 }, 00:08:15.780 "memory_domains": [ 00:08:15.780 { 00:08:15.780 "dma_device_id": "system", 00:08:15.780 "dma_device_type": 1 00:08:15.780 }, 00:08:15.780 { 00:08:15.780 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:15.780 "dma_device_type": 2 00:08:15.780 } 00:08:15.780 ], 00:08:15.780 "driver_specific": {} 00:08:15.780 } 00:08:15.780 ] 00:08:15.780 04:57:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.780 04:57:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:15.780 04:57:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 
00:08:15.780 04:57:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:15.780 04:57:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:15.780 04:57:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:15.780 04:57:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:15.780 04:57:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:15.780 04:57:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:15.780 04:57:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:15.780 04:57:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:15.780 04:57:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:15.780 04:57:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:15.780 04:57:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:15.780 04:57:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.780 04:57:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:15.780 04:57:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.780 04:57:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:15.780 "name": "Existed_Raid", 00:08:15.780 "uuid": "65377463-634c-4025-b54a-aa2410014b83", 00:08:15.780 "strip_size_kb": 64, 00:08:15.780 "state": "configuring", 00:08:15.780 "raid_level": "concat", 00:08:15.780 "superblock": true, 00:08:15.780 
"num_base_bdevs": 3, 00:08:15.780 "num_base_bdevs_discovered": 1, 00:08:15.780 "num_base_bdevs_operational": 3, 00:08:15.780 "base_bdevs_list": [ 00:08:15.780 { 00:08:15.780 "name": "BaseBdev1", 00:08:15.780 "uuid": "d2481cb9-8137-4499-9d3c-d76b8ad28e53", 00:08:15.780 "is_configured": true, 00:08:15.780 "data_offset": 2048, 00:08:15.780 "data_size": 63488 00:08:15.780 }, 00:08:15.780 { 00:08:15.780 "name": "BaseBdev2", 00:08:15.780 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:15.780 "is_configured": false, 00:08:15.780 "data_offset": 0, 00:08:15.780 "data_size": 0 00:08:15.780 }, 00:08:15.780 { 00:08:15.780 "name": "BaseBdev3", 00:08:15.780 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:15.780 "is_configured": false, 00:08:15.780 "data_offset": 0, 00:08:15.780 "data_size": 0 00:08:15.780 } 00:08:15.780 ] 00:08:15.780 }' 00:08:15.780 04:57:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:15.780 04:57:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:16.040 04:57:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:16.040 04:57:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.040 04:57:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:16.040 [2024-12-14 04:57:26.864289] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:16.040 [2024-12-14 04:57:26.864334] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:08:16.040 04:57:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.040 04:57:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:16.040 
04:57:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.040 04:57:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:16.040 [2024-12-14 04:57:26.876366] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:16.040 [2024-12-14 04:57:26.878147] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:16.040 [2024-12-14 04:57:26.878219] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:16.040 [2024-12-14 04:57:26.878230] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:16.040 [2024-12-14 04:57:26.878240] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:16.040 04:57:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.040 04:57:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:16.040 04:57:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:16.040 04:57:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:16.040 04:57:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:16.040 04:57:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:16.040 04:57:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:16.040 04:57:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:16.040 04:57:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:16.040 04:57:26 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:16.040 04:57:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:16.041 04:57:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:16.041 04:57:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:16.041 04:57:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:16.041 04:57:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:16.041 04:57:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.041 04:57:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:16.041 04:57:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.300 04:57:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:16.300 "name": "Existed_Raid", 00:08:16.300 "uuid": "9e8f530e-6fda-44ca-8b66-44a2cbda6b4e", 00:08:16.300 "strip_size_kb": 64, 00:08:16.300 "state": "configuring", 00:08:16.300 "raid_level": "concat", 00:08:16.300 "superblock": true, 00:08:16.300 "num_base_bdevs": 3, 00:08:16.300 "num_base_bdevs_discovered": 1, 00:08:16.300 "num_base_bdevs_operational": 3, 00:08:16.300 "base_bdevs_list": [ 00:08:16.300 { 00:08:16.300 "name": "BaseBdev1", 00:08:16.300 "uuid": "d2481cb9-8137-4499-9d3c-d76b8ad28e53", 00:08:16.300 "is_configured": true, 00:08:16.300 "data_offset": 2048, 00:08:16.300 "data_size": 63488 00:08:16.300 }, 00:08:16.300 { 00:08:16.300 "name": "BaseBdev2", 00:08:16.300 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:16.300 "is_configured": false, 00:08:16.300 "data_offset": 0, 00:08:16.300 "data_size": 0 00:08:16.300 }, 00:08:16.300 { 00:08:16.300 "name": "BaseBdev3", 00:08:16.300 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:08:16.300 "is_configured": false, 00:08:16.300 "data_offset": 0, 00:08:16.300 "data_size": 0 00:08:16.300 } 00:08:16.300 ] 00:08:16.300 }' 00:08:16.300 04:57:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:16.300 04:57:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:16.560 04:57:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:16.560 04:57:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.560 04:57:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:16.560 [2024-12-14 04:57:27.353548] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:16.560 BaseBdev2 00:08:16.560 04:57:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.560 04:57:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:16.560 04:57:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:08:16.560 04:57:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:16.560 04:57:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:16.560 04:57:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:16.560 04:57:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:16.560 04:57:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:16.560 04:57:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.560 04:57:27 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:08:16.560 04:57:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.560 04:57:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:16.560 04:57:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.560 04:57:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:16.560 [ 00:08:16.560 { 00:08:16.560 "name": "BaseBdev2", 00:08:16.560 "aliases": [ 00:08:16.560 "bf4b96b7-7e20-417c-9564-c60c18734152" 00:08:16.560 ], 00:08:16.560 "product_name": "Malloc disk", 00:08:16.560 "block_size": 512, 00:08:16.560 "num_blocks": 65536, 00:08:16.560 "uuid": "bf4b96b7-7e20-417c-9564-c60c18734152", 00:08:16.560 "assigned_rate_limits": { 00:08:16.560 "rw_ios_per_sec": 0, 00:08:16.560 "rw_mbytes_per_sec": 0, 00:08:16.560 "r_mbytes_per_sec": 0, 00:08:16.560 "w_mbytes_per_sec": 0 00:08:16.560 }, 00:08:16.560 "claimed": true, 00:08:16.560 "claim_type": "exclusive_write", 00:08:16.560 "zoned": false, 00:08:16.560 "supported_io_types": { 00:08:16.560 "read": true, 00:08:16.560 "write": true, 00:08:16.560 "unmap": true, 00:08:16.560 "flush": true, 00:08:16.560 "reset": true, 00:08:16.560 "nvme_admin": false, 00:08:16.560 "nvme_io": false, 00:08:16.560 "nvme_io_md": false, 00:08:16.560 "write_zeroes": true, 00:08:16.560 "zcopy": true, 00:08:16.560 "get_zone_info": false, 00:08:16.560 "zone_management": false, 00:08:16.560 "zone_append": false, 00:08:16.560 "compare": false, 00:08:16.560 "compare_and_write": false, 00:08:16.560 "abort": true, 00:08:16.560 "seek_hole": false, 00:08:16.560 "seek_data": false, 00:08:16.560 "copy": true, 00:08:16.560 "nvme_iov_md": false 00:08:16.560 }, 00:08:16.560 "memory_domains": [ 00:08:16.560 { 00:08:16.560 "dma_device_id": "system", 00:08:16.560 "dma_device_type": 1 00:08:16.560 }, 00:08:16.560 { 00:08:16.560 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:16.560 "dma_device_type": 2 00:08:16.560 } 00:08:16.560 ], 00:08:16.560 "driver_specific": {} 00:08:16.560 } 00:08:16.560 ] 00:08:16.560 04:57:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.560 04:57:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:16.560 04:57:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:16.560 04:57:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:16.560 04:57:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:16.560 04:57:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:16.560 04:57:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:16.560 04:57:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:16.560 04:57:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:16.560 04:57:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:16.560 04:57:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:16.560 04:57:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:16.560 04:57:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:16.560 04:57:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:16.560 04:57:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:16.560 04:57:27 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:16.560 04:57:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.560 04:57:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:16.560 04:57:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.560 04:57:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:16.560 "name": "Existed_Raid", 00:08:16.560 "uuid": "9e8f530e-6fda-44ca-8b66-44a2cbda6b4e", 00:08:16.560 "strip_size_kb": 64, 00:08:16.560 "state": "configuring", 00:08:16.560 "raid_level": "concat", 00:08:16.560 "superblock": true, 00:08:16.560 "num_base_bdevs": 3, 00:08:16.560 "num_base_bdevs_discovered": 2, 00:08:16.560 "num_base_bdevs_operational": 3, 00:08:16.560 "base_bdevs_list": [ 00:08:16.560 { 00:08:16.560 "name": "BaseBdev1", 00:08:16.560 "uuid": "d2481cb9-8137-4499-9d3c-d76b8ad28e53", 00:08:16.560 "is_configured": true, 00:08:16.560 "data_offset": 2048, 00:08:16.560 "data_size": 63488 00:08:16.560 }, 00:08:16.560 { 00:08:16.560 "name": "BaseBdev2", 00:08:16.560 "uuid": "bf4b96b7-7e20-417c-9564-c60c18734152", 00:08:16.560 "is_configured": true, 00:08:16.560 "data_offset": 2048, 00:08:16.560 "data_size": 63488 00:08:16.560 }, 00:08:16.560 { 00:08:16.560 "name": "BaseBdev3", 00:08:16.560 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:16.560 "is_configured": false, 00:08:16.560 "data_offset": 0, 00:08:16.560 "data_size": 0 00:08:16.560 } 00:08:16.560 ] 00:08:16.560 }' 00:08:16.560 04:57:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:16.560 04:57:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:17.128 04:57:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:17.128 04:57:27 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.128 04:57:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:17.128 [2024-12-14 04:57:27.795726] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:17.128 [2024-12-14 04:57:27.795912] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:08:17.128 [2024-12-14 04:57:27.795932] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:17.128 [2024-12-14 04:57:27.796233] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:08:17.128 [2024-12-14 04:57:27.796352] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:08:17.128 BaseBdev3 00:08:17.128 [2024-12-14 04:57:27.796361] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:08:17.128 [2024-12-14 04:57:27.796478] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:17.128 04:57:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.128 04:57:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:08:17.128 04:57:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:08:17.128 04:57:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:17.128 04:57:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:17.128 04:57:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:17.128 04:57:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:17.128 04:57:27 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:17.128 04:57:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.128 04:57:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:17.128 04:57:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.128 04:57:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:17.128 04:57:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.128 04:57:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:17.128 [ 00:08:17.128 { 00:08:17.128 "name": "BaseBdev3", 00:08:17.128 "aliases": [ 00:08:17.128 "67e7ea98-88ee-4d19-bd00-8f60dcb861e7" 00:08:17.128 ], 00:08:17.128 "product_name": "Malloc disk", 00:08:17.128 "block_size": 512, 00:08:17.128 "num_blocks": 65536, 00:08:17.128 "uuid": "67e7ea98-88ee-4d19-bd00-8f60dcb861e7", 00:08:17.128 "assigned_rate_limits": { 00:08:17.128 "rw_ios_per_sec": 0, 00:08:17.128 "rw_mbytes_per_sec": 0, 00:08:17.128 "r_mbytes_per_sec": 0, 00:08:17.128 "w_mbytes_per_sec": 0 00:08:17.128 }, 00:08:17.128 "claimed": true, 00:08:17.128 "claim_type": "exclusive_write", 00:08:17.128 "zoned": false, 00:08:17.128 "supported_io_types": { 00:08:17.128 "read": true, 00:08:17.128 "write": true, 00:08:17.128 "unmap": true, 00:08:17.128 "flush": true, 00:08:17.128 "reset": true, 00:08:17.128 "nvme_admin": false, 00:08:17.128 "nvme_io": false, 00:08:17.128 "nvme_io_md": false, 00:08:17.128 "write_zeroes": true, 00:08:17.128 "zcopy": true, 00:08:17.128 "get_zone_info": false, 00:08:17.128 "zone_management": false, 00:08:17.128 "zone_append": false, 00:08:17.128 "compare": false, 00:08:17.128 "compare_and_write": false, 00:08:17.128 "abort": true, 00:08:17.128 "seek_hole": false, 00:08:17.128 "seek_data": false, 
00:08:17.128 "copy": true, 00:08:17.128 "nvme_iov_md": false 00:08:17.128 }, 00:08:17.128 "memory_domains": [ 00:08:17.128 { 00:08:17.128 "dma_device_id": "system", 00:08:17.128 "dma_device_type": 1 00:08:17.128 }, 00:08:17.128 { 00:08:17.128 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:17.128 "dma_device_type": 2 00:08:17.128 } 00:08:17.128 ], 00:08:17.128 "driver_specific": {} 00:08:17.128 } 00:08:17.128 ] 00:08:17.128 04:57:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.128 04:57:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:17.128 04:57:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:17.128 04:57:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:17.128 04:57:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:08:17.128 04:57:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:17.128 04:57:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:17.128 04:57:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:17.128 04:57:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:17.128 04:57:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:17.128 04:57:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:17.128 04:57:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:17.128 04:57:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:17.128 04:57:27 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:08:17.128 04:57:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:17.128 04:57:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:17.128 04:57:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.128 04:57:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:17.128 04:57:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.128 04:57:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:17.128 "name": "Existed_Raid", 00:08:17.128 "uuid": "9e8f530e-6fda-44ca-8b66-44a2cbda6b4e", 00:08:17.128 "strip_size_kb": 64, 00:08:17.128 "state": "online", 00:08:17.128 "raid_level": "concat", 00:08:17.128 "superblock": true, 00:08:17.128 "num_base_bdevs": 3, 00:08:17.128 "num_base_bdevs_discovered": 3, 00:08:17.128 "num_base_bdevs_operational": 3, 00:08:17.128 "base_bdevs_list": [ 00:08:17.128 { 00:08:17.128 "name": "BaseBdev1", 00:08:17.128 "uuid": "d2481cb9-8137-4499-9d3c-d76b8ad28e53", 00:08:17.128 "is_configured": true, 00:08:17.128 "data_offset": 2048, 00:08:17.128 "data_size": 63488 00:08:17.128 }, 00:08:17.128 { 00:08:17.128 "name": "BaseBdev2", 00:08:17.128 "uuid": "bf4b96b7-7e20-417c-9564-c60c18734152", 00:08:17.128 "is_configured": true, 00:08:17.128 "data_offset": 2048, 00:08:17.128 "data_size": 63488 00:08:17.128 }, 00:08:17.128 { 00:08:17.128 "name": "BaseBdev3", 00:08:17.128 "uuid": "67e7ea98-88ee-4d19-bd00-8f60dcb861e7", 00:08:17.128 "is_configured": true, 00:08:17.128 "data_offset": 2048, 00:08:17.128 "data_size": 63488 00:08:17.128 } 00:08:17.128 ] 00:08:17.128 }' 00:08:17.128 04:57:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:17.128 04:57:27 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:17.388 04:57:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:17.388 04:57:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:17.388 04:57:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:17.388 04:57:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:17.388 04:57:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:17.388 04:57:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:17.388 04:57:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:17.388 04:57:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.388 04:57:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:17.388 04:57:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:17.388 [2024-12-14 04:57:28.247346] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:17.388 04:57:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.647 04:57:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:17.647 "name": "Existed_Raid", 00:08:17.647 "aliases": [ 00:08:17.647 "9e8f530e-6fda-44ca-8b66-44a2cbda6b4e" 00:08:17.647 ], 00:08:17.647 "product_name": "Raid Volume", 00:08:17.647 "block_size": 512, 00:08:17.647 "num_blocks": 190464, 00:08:17.647 "uuid": "9e8f530e-6fda-44ca-8b66-44a2cbda6b4e", 00:08:17.647 "assigned_rate_limits": { 00:08:17.647 "rw_ios_per_sec": 0, 00:08:17.647 "rw_mbytes_per_sec": 0, 00:08:17.647 
"r_mbytes_per_sec": 0, 00:08:17.647 "w_mbytes_per_sec": 0 00:08:17.647 }, 00:08:17.647 "claimed": false, 00:08:17.647 "zoned": false, 00:08:17.647 "supported_io_types": { 00:08:17.647 "read": true, 00:08:17.647 "write": true, 00:08:17.647 "unmap": true, 00:08:17.647 "flush": true, 00:08:17.647 "reset": true, 00:08:17.647 "nvme_admin": false, 00:08:17.647 "nvme_io": false, 00:08:17.647 "nvme_io_md": false, 00:08:17.647 "write_zeroes": true, 00:08:17.647 "zcopy": false, 00:08:17.647 "get_zone_info": false, 00:08:17.647 "zone_management": false, 00:08:17.647 "zone_append": false, 00:08:17.647 "compare": false, 00:08:17.647 "compare_and_write": false, 00:08:17.647 "abort": false, 00:08:17.647 "seek_hole": false, 00:08:17.647 "seek_data": false, 00:08:17.647 "copy": false, 00:08:17.647 "nvme_iov_md": false 00:08:17.647 }, 00:08:17.647 "memory_domains": [ 00:08:17.647 { 00:08:17.647 "dma_device_id": "system", 00:08:17.647 "dma_device_type": 1 00:08:17.647 }, 00:08:17.647 { 00:08:17.647 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:17.647 "dma_device_type": 2 00:08:17.647 }, 00:08:17.647 { 00:08:17.647 "dma_device_id": "system", 00:08:17.647 "dma_device_type": 1 00:08:17.647 }, 00:08:17.647 { 00:08:17.647 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:17.647 "dma_device_type": 2 00:08:17.647 }, 00:08:17.647 { 00:08:17.647 "dma_device_id": "system", 00:08:17.647 "dma_device_type": 1 00:08:17.647 }, 00:08:17.647 { 00:08:17.647 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:17.647 "dma_device_type": 2 00:08:17.647 } 00:08:17.647 ], 00:08:17.647 "driver_specific": { 00:08:17.647 "raid": { 00:08:17.647 "uuid": "9e8f530e-6fda-44ca-8b66-44a2cbda6b4e", 00:08:17.647 "strip_size_kb": 64, 00:08:17.647 "state": "online", 00:08:17.647 "raid_level": "concat", 00:08:17.647 "superblock": true, 00:08:17.647 "num_base_bdevs": 3, 00:08:17.647 "num_base_bdevs_discovered": 3, 00:08:17.647 "num_base_bdevs_operational": 3, 00:08:17.647 "base_bdevs_list": [ 00:08:17.647 { 00:08:17.647 
"name": "BaseBdev1", 00:08:17.647 "uuid": "d2481cb9-8137-4499-9d3c-d76b8ad28e53", 00:08:17.647 "is_configured": true, 00:08:17.647 "data_offset": 2048, 00:08:17.647 "data_size": 63488 00:08:17.647 }, 00:08:17.647 { 00:08:17.647 "name": "BaseBdev2", 00:08:17.647 "uuid": "bf4b96b7-7e20-417c-9564-c60c18734152", 00:08:17.647 "is_configured": true, 00:08:17.647 "data_offset": 2048, 00:08:17.647 "data_size": 63488 00:08:17.647 }, 00:08:17.647 { 00:08:17.647 "name": "BaseBdev3", 00:08:17.647 "uuid": "67e7ea98-88ee-4d19-bd00-8f60dcb861e7", 00:08:17.647 "is_configured": true, 00:08:17.647 "data_offset": 2048, 00:08:17.647 "data_size": 63488 00:08:17.647 } 00:08:17.647 ] 00:08:17.647 } 00:08:17.647 } 00:08:17.647 }' 00:08:17.647 04:57:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:17.647 04:57:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:17.647 BaseBdev2 00:08:17.647 BaseBdev3' 00:08:17.647 04:57:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:17.647 04:57:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:17.647 04:57:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:17.647 04:57:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:17.647 04:57:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.647 04:57:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:17.647 04:57:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:17.647 04:57:28 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.647 04:57:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:17.647 04:57:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:17.647 04:57:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:17.647 04:57:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:17.647 04:57:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.647 04:57:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:17.647 04:57:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:17.647 04:57:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.647 04:57:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:17.647 04:57:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:17.647 04:57:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:17.647 04:57:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:17.647 04:57:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:17.647 04:57:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.647 04:57:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:17.647 04:57:28 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.647 04:57:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:17.647 04:57:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:17.647 04:57:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:17.647 04:57:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.647 04:57:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:17.647 [2024-12-14 04:57:28.506680] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:17.647 [2024-12-14 04:57:28.506750] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:17.648 [2024-12-14 04:57:28.506834] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:17.648 04:57:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.648 04:57:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:17.648 04:57:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:08:17.648 04:57:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:17.648 04:57:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:08:17.648 04:57:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:17.648 04:57:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:08:17.648 04:57:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:17.648 04:57:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=offline 00:08:17.648 04:57:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:17.648 04:57:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:17.648 04:57:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:17.648 04:57:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:17.648 04:57:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:17.648 04:57:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:17.648 04:57:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:17.907 04:57:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:17.907 04:57:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:17.907 04:57:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.907 04:57:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:17.907 04:57:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.907 04:57:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:17.907 "name": "Existed_Raid", 00:08:17.907 "uuid": "9e8f530e-6fda-44ca-8b66-44a2cbda6b4e", 00:08:17.907 "strip_size_kb": 64, 00:08:17.907 "state": "offline", 00:08:17.907 "raid_level": "concat", 00:08:17.907 "superblock": true, 00:08:17.907 "num_base_bdevs": 3, 00:08:17.907 "num_base_bdevs_discovered": 2, 00:08:17.907 "num_base_bdevs_operational": 2, 00:08:17.907 "base_bdevs_list": [ 00:08:17.907 { 00:08:17.907 "name": null, 00:08:17.907 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:08:17.907 "is_configured": false, 00:08:17.907 "data_offset": 0, 00:08:17.907 "data_size": 63488 00:08:17.907 }, 00:08:17.907 { 00:08:17.907 "name": "BaseBdev2", 00:08:17.907 "uuid": "bf4b96b7-7e20-417c-9564-c60c18734152", 00:08:17.907 "is_configured": true, 00:08:17.907 "data_offset": 2048, 00:08:17.907 "data_size": 63488 00:08:17.907 }, 00:08:17.907 { 00:08:17.907 "name": "BaseBdev3", 00:08:17.907 "uuid": "67e7ea98-88ee-4d19-bd00-8f60dcb861e7", 00:08:17.907 "is_configured": true, 00:08:17.907 "data_offset": 2048, 00:08:17.907 "data_size": 63488 00:08:17.907 } 00:08:17.907 ] 00:08:17.907 }' 00:08:17.907 04:57:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:17.907 04:57:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:18.166 04:57:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:18.166 04:57:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:18.166 04:57:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:18.166 04:57:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:18.166 04:57:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.166 04:57:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:18.166 04:57:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.166 04:57:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:18.166 04:57:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:18.166 04:57:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 
00:08:18.166 04:57:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.166 04:57:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:18.167 [2024-12-14 04:57:28.909605] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:18.167 04:57:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.167 04:57:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:18.167 04:57:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:18.167 04:57:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:18.167 04:57:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:18.167 04:57:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.167 04:57:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:18.167 04:57:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.167 04:57:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:18.167 04:57:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:18.167 04:57:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:08:18.167 04:57:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.167 04:57:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:18.167 [2024-12-14 04:57:28.980617] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:18.167 [2024-12-14 04:57:28.980706] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:08:18.167 04:57:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.167 04:57:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:18.167 04:57:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:18.167 04:57:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:18.167 04:57:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:18.167 04:57:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.167 04:57:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:18.167 04:57:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.167 04:57:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:18.167 04:57:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:18.426 04:57:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:08:18.426 04:57:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:08:18.426 04:57:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:18.426 04:57:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:18.426 04:57:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.426 04:57:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:18.426 BaseBdev2 00:08:18.426 04:57:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.426 
04:57:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:08:18.426 04:57:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:08:18.426 04:57:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:18.426 04:57:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:18.426 04:57:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:18.426 04:57:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:18.426 04:57:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:18.426 04:57:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.426 04:57:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:18.426 04:57:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.426 04:57:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:18.426 04:57:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.426 04:57:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:18.426 [ 00:08:18.426 { 00:08:18.426 "name": "BaseBdev2", 00:08:18.426 "aliases": [ 00:08:18.426 "c892dd4b-848a-4bb5-97b9-735307a0af25" 00:08:18.426 ], 00:08:18.426 "product_name": "Malloc disk", 00:08:18.426 "block_size": 512, 00:08:18.426 "num_blocks": 65536, 00:08:18.426 "uuid": "c892dd4b-848a-4bb5-97b9-735307a0af25", 00:08:18.426 "assigned_rate_limits": { 00:08:18.426 "rw_ios_per_sec": 0, 00:08:18.426 "rw_mbytes_per_sec": 0, 00:08:18.426 "r_mbytes_per_sec": 0, 00:08:18.426 "w_mbytes_per_sec": 0 
00:08:18.426 }, 00:08:18.426 "claimed": false, 00:08:18.426 "zoned": false, 00:08:18.426 "supported_io_types": { 00:08:18.426 "read": true, 00:08:18.426 "write": true, 00:08:18.426 "unmap": true, 00:08:18.426 "flush": true, 00:08:18.426 "reset": true, 00:08:18.426 "nvme_admin": false, 00:08:18.426 "nvme_io": false, 00:08:18.426 "nvme_io_md": false, 00:08:18.426 "write_zeroes": true, 00:08:18.426 "zcopy": true, 00:08:18.426 "get_zone_info": false, 00:08:18.426 "zone_management": false, 00:08:18.426 "zone_append": false, 00:08:18.426 "compare": false, 00:08:18.426 "compare_and_write": false, 00:08:18.426 "abort": true, 00:08:18.426 "seek_hole": false, 00:08:18.426 "seek_data": false, 00:08:18.426 "copy": true, 00:08:18.426 "nvme_iov_md": false 00:08:18.426 }, 00:08:18.426 "memory_domains": [ 00:08:18.426 { 00:08:18.426 "dma_device_id": "system", 00:08:18.426 "dma_device_type": 1 00:08:18.426 }, 00:08:18.426 { 00:08:18.426 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:18.426 "dma_device_type": 2 00:08:18.426 } 00:08:18.426 ], 00:08:18.426 "driver_specific": {} 00:08:18.426 } 00:08:18.426 ] 00:08:18.426 04:57:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.426 04:57:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:18.426 04:57:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:18.426 04:57:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:18.426 04:57:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:18.426 04:57:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.426 04:57:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:18.426 BaseBdev3 00:08:18.426 04:57:29 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.426 04:57:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:08:18.426 04:57:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:08:18.426 04:57:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:18.426 04:57:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:18.426 04:57:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:18.426 04:57:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:18.426 04:57:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:18.426 04:57:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.426 04:57:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:18.426 04:57:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.426 04:57:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:18.426 04:57:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.426 04:57:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:18.426 [ 00:08:18.426 { 00:08:18.426 "name": "BaseBdev3", 00:08:18.426 "aliases": [ 00:08:18.426 "f04ed7eb-e247-45e7-bf04-d33d63ccf6c8" 00:08:18.426 ], 00:08:18.426 "product_name": "Malloc disk", 00:08:18.426 "block_size": 512, 00:08:18.426 "num_blocks": 65536, 00:08:18.426 "uuid": "f04ed7eb-e247-45e7-bf04-d33d63ccf6c8", 00:08:18.426 "assigned_rate_limits": { 00:08:18.426 "rw_ios_per_sec": 0, 00:08:18.426 "rw_mbytes_per_sec": 0, 
00:08:18.426 "r_mbytes_per_sec": 0, 00:08:18.426 "w_mbytes_per_sec": 0 00:08:18.426 }, 00:08:18.426 "claimed": false, 00:08:18.426 "zoned": false, 00:08:18.426 "supported_io_types": { 00:08:18.426 "read": true, 00:08:18.426 "write": true, 00:08:18.426 "unmap": true, 00:08:18.426 "flush": true, 00:08:18.426 "reset": true, 00:08:18.426 "nvme_admin": false, 00:08:18.426 "nvme_io": false, 00:08:18.426 "nvme_io_md": false, 00:08:18.426 "write_zeroes": true, 00:08:18.426 "zcopy": true, 00:08:18.426 "get_zone_info": false, 00:08:18.426 "zone_management": false, 00:08:18.426 "zone_append": false, 00:08:18.426 "compare": false, 00:08:18.426 "compare_and_write": false, 00:08:18.426 "abort": true, 00:08:18.426 "seek_hole": false, 00:08:18.426 "seek_data": false, 00:08:18.426 "copy": true, 00:08:18.426 "nvme_iov_md": false 00:08:18.426 }, 00:08:18.426 "memory_domains": [ 00:08:18.426 { 00:08:18.426 "dma_device_id": "system", 00:08:18.426 "dma_device_type": 1 00:08:18.426 }, 00:08:18.426 { 00:08:18.426 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:18.427 "dma_device_type": 2 00:08:18.427 } 00:08:18.427 ], 00:08:18.427 "driver_specific": {} 00:08:18.427 } 00:08:18.427 ] 00:08:18.427 04:57:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.427 04:57:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:18.427 04:57:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:18.427 04:57:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:18.427 04:57:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:18.427 04:57:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.427 04:57:29 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:08:18.427 [2024-12-14 04:57:29.155848] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:18.427 [2024-12-14 04:57:29.155933] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:18.427 [2024-12-14 04:57:29.155975] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:18.427 [2024-12-14 04:57:29.157801] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:18.427 04:57:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.427 04:57:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:18.427 04:57:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:18.427 04:57:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:18.427 04:57:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:18.427 04:57:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:18.427 04:57:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:18.427 04:57:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:18.427 04:57:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:18.427 04:57:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:18.427 04:57:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:18.427 04:57:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:18.427 04:57:29 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:18.427 04:57:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.427 04:57:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:18.427 04:57:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.427 04:57:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:18.427 "name": "Existed_Raid", 00:08:18.427 "uuid": "fd70613d-135a-4ec5-bcf1-53c5eee12b5c", 00:08:18.427 "strip_size_kb": 64, 00:08:18.427 "state": "configuring", 00:08:18.427 "raid_level": "concat", 00:08:18.427 "superblock": true, 00:08:18.427 "num_base_bdevs": 3, 00:08:18.427 "num_base_bdevs_discovered": 2, 00:08:18.427 "num_base_bdevs_operational": 3, 00:08:18.427 "base_bdevs_list": [ 00:08:18.427 { 00:08:18.427 "name": "BaseBdev1", 00:08:18.427 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:18.427 "is_configured": false, 00:08:18.427 "data_offset": 0, 00:08:18.427 "data_size": 0 00:08:18.427 }, 00:08:18.427 { 00:08:18.427 "name": "BaseBdev2", 00:08:18.427 "uuid": "c892dd4b-848a-4bb5-97b9-735307a0af25", 00:08:18.427 "is_configured": true, 00:08:18.427 "data_offset": 2048, 00:08:18.427 "data_size": 63488 00:08:18.427 }, 00:08:18.427 { 00:08:18.427 "name": "BaseBdev3", 00:08:18.427 "uuid": "f04ed7eb-e247-45e7-bf04-d33d63ccf6c8", 00:08:18.427 "is_configured": true, 00:08:18.427 "data_offset": 2048, 00:08:18.427 "data_size": 63488 00:08:18.427 } 00:08:18.427 ] 00:08:18.427 }' 00:08:18.427 04:57:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:18.427 04:57:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:18.996 04:57:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 
00:08:18.996 04:57:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.996 04:57:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:18.996 [2024-12-14 04:57:29.619033] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:18.996 04:57:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.996 04:57:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:18.996 04:57:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:18.996 04:57:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:18.996 04:57:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:18.996 04:57:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:18.996 04:57:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:18.996 04:57:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:18.996 04:57:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:18.996 04:57:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:18.996 04:57:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:18.996 04:57:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:18.996 04:57:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:18.997 04:57:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:08:18.997 04:57:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:18.997 04:57:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.997 04:57:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:18.997 "name": "Existed_Raid", 00:08:18.997 "uuid": "fd70613d-135a-4ec5-bcf1-53c5eee12b5c", 00:08:18.997 "strip_size_kb": 64, 00:08:18.997 "state": "configuring", 00:08:18.997 "raid_level": "concat", 00:08:18.997 "superblock": true, 00:08:18.997 "num_base_bdevs": 3, 00:08:18.997 "num_base_bdevs_discovered": 1, 00:08:18.997 "num_base_bdevs_operational": 3, 00:08:18.997 "base_bdevs_list": [ 00:08:18.997 { 00:08:18.997 "name": "BaseBdev1", 00:08:18.997 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:18.997 "is_configured": false, 00:08:18.997 "data_offset": 0, 00:08:18.997 "data_size": 0 00:08:18.997 }, 00:08:18.997 { 00:08:18.997 "name": null, 00:08:18.997 "uuid": "c892dd4b-848a-4bb5-97b9-735307a0af25", 00:08:18.997 "is_configured": false, 00:08:18.997 "data_offset": 0, 00:08:18.997 "data_size": 63488 00:08:18.997 }, 00:08:18.997 { 00:08:18.997 "name": "BaseBdev3", 00:08:18.997 "uuid": "f04ed7eb-e247-45e7-bf04-d33d63ccf6c8", 00:08:18.997 "is_configured": true, 00:08:18.997 "data_offset": 2048, 00:08:18.997 "data_size": 63488 00:08:18.997 } 00:08:18.997 ] 00:08:18.997 }' 00:08:18.997 04:57:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:18.997 04:57:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:19.257 04:57:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:19.257 04:57:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:19.257 04:57:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:08:19.257 04:57:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:19.257 04:57:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.257 04:57:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:08:19.257 04:57:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:19.257 04:57:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.257 04:57:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:19.257 [2024-12-14 04:57:30.109112] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:19.257 BaseBdev1 00:08:19.257 04:57:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.257 04:57:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:08:19.257 04:57:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:08:19.257 04:57:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:19.257 04:57:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:19.257 04:57:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:19.257 04:57:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:19.257 04:57:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:19.257 04:57:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.257 04:57:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:19.257 04:57:30 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.257 04:57:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:19.257 04:57:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.257 04:57:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:19.257 [ 00:08:19.257 { 00:08:19.257 "name": "BaseBdev1", 00:08:19.257 "aliases": [ 00:08:19.257 "7850ff4c-230e-46fe-8f54-c97fff5afdd4" 00:08:19.257 ], 00:08:19.257 "product_name": "Malloc disk", 00:08:19.257 "block_size": 512, 00:08:19.257 "num_blocks": 65536, 00:08:19.257 "uuid": "7850ff4c-230e-46fe-8f54-c97fff5afdd4", 00:08:19.257 "assigned_rate_limits": { 00:08:19.257 "rw_ios_per_sec": 0, 00:08:19.257 "rw_mbytes_per_sec": 0, 00:08:19.516 "r_mbytes_per_sec": 0, 00:08:19.516 "w_mbytes_per_sec": 0 00:08:19.516 }, 00:08:19.516 "claimed": true, 00:08:19.516 "claim_type": "exclusive_write", 00:08:19.516 "zoned": false, 00:08:19.516 "supported_io_types": { 00:08:19.516 "read": true, 00:08:19.516 "write": true, 00:08:19.516 "unmap": true, 00:08:19.516 "flush": true, 00:08:19.516 "reset": true, 00:08:19.516 "nvme_admin": false, 00:08:19.516 "nvme_io": false, 00:08:19.516 "nvme_io_md": false, 00:08:19.516 "write_zeroes": true, 00:08:19.516 "zcopy": true, 00:08:19.516 "get_zone_info": false, 00:08:19.516 "zone_management": false, 00:08:19.516 "zone_append": false, 00:08:19.516 "compare": false, 00:08:19.516 "compare_and_write": false, 00:08:19.516 "abort": true, 00:08:19.516 "seek_hole": false, 00:08:19.516 "seek_data": false, 00:08:19.516 "copy": true, 00:08:19.516 "nvme_iov_md": false 00:08:19.516 }, 00:08:19.516 "memory_domains": [ 00:08:19.516 { 00:08:19.516 "dma_device_id": "system", 00:08:19.516 "dma_device_type": 1 00:08:19.516 }, 00:08:19.516 { 00:08:19.516 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:19.516 
"dma_device_type": 2 00:08:19.516 } 00:08:19.516 ], 00:08:19.516 "driver_specific": {} 00:08:19.516 } 00:08:19.516 ] 00:08:19.516 04:57:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.516 04:57:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:19.516 04:57:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:19.516 04:57:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:19.516 04:57:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:19.516 04:57:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:19.516 04:57:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:19.516 04:57:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:19.516 04:57:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:19.516 04:57:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:19.516 04:57:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:19.516 04:57:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:19.516 04:57:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:19.516 04:57:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:19.516 04:57:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.516 04:57:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:08:19.516 04:57:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.516 04:57:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:19.516 "name": "Existed_Raid", 00:08:19.516 "uuid": "fd70613d-135a-4ec5-bcf1-53c5eee12b5c", 00:08:19.516 "strip_size_kb": 64, 00:08:19.516 "state": "configuring", 00:08:19.516 "raid_level": "concat", 00:08:19.516 "superblock": true, 00:08:19.516 "num_base_bdevs": 3, 00:08:19.516 "num_base_bdevs_discovered": 2, 00:08:19.516 "num_base_bdevs_operational": 3, 00:08:19.516 "base_bdevs_list": [ 00:08:19.516 { 00:08:19.516 "name": "BaseBdev1", 00:08:19.516 "uuid": "7850ff4c-230e-46fe-8f54-c97fff5afdd4", 00:08:19.516 "is_configured": true, 00:08:19.516 "data_offset": 2048, 00:08:19.516 "data_size": 63488 00:08:19.516 }, 00:08:19.516 { 00:08:19.516 "name": null, 00:08:19.516 "uuid": "c892dd4b-848a-4bb5-97b9-735307a0af25", 00:08:19.516 "is_configured": false, 00:08:19.516 "data_offset": 0, 00:08:19.516 "data_size": 63488 00:08:19.516 }, 00:08:19.516 { 00:08:19.516 "name": "BaseBdev3", 00:08:19.516 "uuid": "f04ed7eb-e247-45e7-bf04-d33d63ccf6c8", 00:08:19.516 "is_configured": true, 00:08:19.516 "data_offset": 2048, 00:08:19.516 "data_size": 63488 00:08:19.516 } 00:08:19.516 ] 00:08:19.516 }' 00:08:19.516 04:57:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:19.516 04:57:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:19.775 04:57:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:19.775 04:57:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.775 04:57:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:19.775 04:57:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq 
'.[0].base_bdevs_list[0].is_configured' 00:08:19.775 04:57:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.775 04:57:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:08:19.775 04:57:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:08:19.775 04:57:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.775 04:57:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:19.775 [2024-12-14 04:57:30.640242] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:19.775 04:57:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.775 04:57:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:19.775 04:57:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:19.775 04:57:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:19.776 04:57:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:19.776 04:57:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:19.776 04:57:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:19.776 04:57:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:19.776 04:57:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:19.776 04:57:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:19.776 04:57:30 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:08:19.776 04:57:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:19.776 04:57:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:19.776 04:57:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.776 04:57:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:20.035 04:57:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.035 04:57:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:20.035 "name": "Existed_Raid", 00:08:20.035 "uuid": "fd70613d-135a-4ec5-bcf1-53c5eee12b5c", 00:08:20.035 "strip_size_kb": 64, 00:08:20.035 "state": "configuring", 00:08:20.035 "raid_level": "concat", 00:08:20.035 "superblock": true, 00:08:20.035 "num_base_bdevs": 3, 00:08:20.035 "num_base_bdevs_discovered": 1, 00:08:20.035 "num_base_bdevs_operational": 3, 00:08:20.035 "base_bdevs_list": [ 00:08:20.035 { 00:08:20.035 "name": "BaseBdev1", 00:08:20.035 "uuid": "7850ff4c-230e-46fe-8f54-c97fff5afdd4", 00:08:20.035 "is_configured": true, 00:08:20.035 "data_offset": 2048, 00:08:20.035 "data_size": 63488 00:08:20.035 }, 00:08:20.035 { 00:08:20.035 "name": null, 00:08:20.035 "uuid": "c892dd4b-848a-4bb5-97b9-735307a0af25", 00:08:20.035 "is_configured": false, 00:08:20.035 "data_offset": 0, 00:08:20.035 "data_size": 63488 00:08:20.035 }, 00:08:20.035 { 00:08:20.035 "name": null, 00:08:20.035 "uuid": "f04ed7eb-e247-45e7-bf04-d33d63ccf6c8", 00:08:20.035 "is_configured": false, 00:08:20.035 "data_offset": 0, 00:08:20.035 "data_size": 63488 00:08:20.035 } 00:08:20.035 ] 00:08:20.035 }' 00:08:20.035 04:57:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:20.035 04:57:30 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:08:20.295 04:57:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:20.295 04:57:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:20.295 04:57:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.295 04:57:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:20.295 04:57:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.295 04:57:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:08:20.295 04:57:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:08:20.295 04:57:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.295 04:57:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:20.295 [2024-12-14 04:57:31.127425] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:20.295 04:57:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.295 04:57:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:20.295 04:57:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:20.295 04:57:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:20.295 04:57:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:20.295 04:57:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:20.295 04:57:31 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:20.295 04:57:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:20.295 04:57:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:20.295 04:57:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:20.295 04:57:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:20.295 04:57:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:20.295 04:57:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.295 04:57:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:20.295 04:57:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:20.295 04:57:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.295 04:57:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:20.295 "name": "Existed_Raid", 00:08:20.295 "uuid": "fd70613d-135a-4ec5-bcf1-53c5eee12b5c", 00:08:20.295 "strip_size_kb": 64, 00:08:20.295 "state": "configuring", 00:08:20.295 "raid_level": "concat", 00:08:20.295 "superblock": true, 00:08:20.295 "num_base_bdevs": 3, 00:08:20.295 "num_base_bdevs_discovered": 2, 00:08:20.295 "num_base_bdevs_operational": 3, 00:08:20.295 "base_bdevs_list": [ 00:08:20.295 { 00:08:20.295 "name": "BaseBdev1", 00:08:20.295 "uuid": "7850ff4c-230e-46fe-8f54-c97fff5afdd4", 00:08:20.295 "is_configured": true, 00:08:20.295 "data_offset": 2048, 00:08:20.295 "data_size": 63488 00:08:20.295 }, 00:08:20.295 { 00:08:20.295 "name": null, 00:08:20.295 "uuid": "c892dd4b-848a-4bb5-97b9-735307a0af25", 00:08:20.295 "is_configured": 
false, 00:08:20.295 "data_offset": 0, 00:08:20.295 "data_size": 63488 00:08:20.295 }, 00:08:20.295 { 00:08:20.295 "name": "BaseBdev3", 00:08:20.295 "uuid": "f04ed7eb-e247-45e7-bf04-d33d63ccf6c8", 00:08:20.295 "is_configured": true, 00:08:20.295 "data_offset": 2048, 00:08:20.295 "data_size": 63488 00:08:20.295 } 00:08:20.295 ] 00:08:20.295 }' 00:08:20.295 04:57:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:20.295 04:57:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:20.865 04:57:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:20.865 04:57:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:20.865 04:57:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.865 04:57:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:20.865 04:57:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.865 04:57:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:08:20.865 04:57:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:20.865 04:57:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.865 04:57:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:20.865 [2024-12-14 04:57:31.594701] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:20.865 04:57:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.865 04:57:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:20.865 04:57:31 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:20.865 04:57:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:20.865 04:57:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:20.865 04:57:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:20.865 04:57:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:20.865 04:57:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:20.865 04:57:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:20.865 04:57:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:20.865 04:57:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:20.865 04:57:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:20.865 04:57:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:20.865 04:57:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.865 04:57:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:20.865 04:57:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.865 04:57:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:20.865 "name": "Existed_Raid", 00:08:20.865 "uuid": "fd70613d-135a-4ec5-bcf1-53c5eee12b5c", 00:08:20.865 "strip_size_kb": 64, 00:08:20.865 "state": "configuring", 00:08:20.865 "raid_level": "concat", 00:08:20.865 "superblock": true, 00:08:20.865 "num_base_bdevs": 3, 00:08:20.865 
"num_base_bdevs_discovered": 1, 00:08:20.865 "num_base_bdevs_operational": 3, 00:08:20.865 "base_bdevs_list": [ 00:08:20.865 { 00:08:20.865 "name": null, 00:08:20.865 "uuid": "7850ff4c-230e-46fe-8f54-c97fff5afdd4", 00:08:20.865 "is_configured": false, 00:08:20.865 "data_offset": 0, 00:08:20.865 "data_size": 63488 00:08:20.865 }, 00:08:20.865 { 00:08:20.865 "name": null, 00:08:20.865 "uuid": "c892dd4b-848a-4bb5-97b9-735307a0af25", 00:08:20.865 "is_configured": false, 00:08:20.865 "data_offset": 0, 00:08:20.865 "data_size": 63488 00:08:20.865 }, 00:08:20.865 { 00:08:20.865 "name": "BaseBdev3", 00:08:20.865 "uuid": "f04ed7eb-e247-45e7-bf04-d33d63ccf6c8", 00:08:20.865 "is_configured": true, 00:08:20.865 "data_offset": 2048, 00:08:20.865 "data_size": 63488 00:08:20.865 } 00:08:20.865 ] 00:08:20.865 }' 00:08:20.865 04:57:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:20.865 04:57:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:21.434 04:57:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:21.434 04:57:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:21.434 04:57:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.434 04:57:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:21.434 04:57:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.434 04:57:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:08:21.434 04:57:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:08:21.434 04:57:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.434 04:57:32 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:21.434 [2024-12-14 04:57:32.072456] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:21.434 04:57:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.434 04:57:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:21.434 04:57:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:21.434 04:57:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:21.434 04:57:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:21.434 04:57:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:21.434 04:57:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:21.434 04:57:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:21.434 04:57:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:21.434 04:57:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:21.434 04:57:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:21.434 04:57:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:21.434 04:57:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:21.434 04:57:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.434 04:57:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:21.434 
04:57:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.434 04:57:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:21.434 "name": "Existed_Raid", 00:08:21.434 "uuid": "fd70613d-135a-4ec5-bcf1-53c5eee12b5c", 00:08:21.434 "strip_size_kb": 64, 00:08:21.434 "state": "configuring", 00:08:21.434 "raid_level": "concat", 00:08:21.434 "superblock": true, 00:08:21.434 "num_base_bdevs": 3, 00:08:21.434 "num_base_bdevs_discovered": 2, 00:08:21.434 "num_base_bdevs_operational": 3, 00:08:21.434 "base_bdevs_list": [ 00:08:21.434 { 00:08:21.434 "name": null, 00:08:21.434 "uuid": "7850ff4c-230e-46fe-8f54-c97fff5afdd4", 00:08:21.434 "is_configured": false, 00:08:21.434 "data_offset": 0, 00:08:21.434 "data_size": 63488 00:08:21.434 }, 00:08:21.434 { 00:08:21.434 "name": "BaseBdev2", 00:08:21.434 "uuid": "c892dd4b-848a-4bb5-97b9-735307a0af25", 00:08:21.434 "is_configured": true, 00:08:21.434 "data_offset": 2048, 00:08:21.434 "data_size": 63488 00:08:21.434 }, 00:08:21.434 { 00:08:21.434 "name": "BaseBdev3", 00:08:21.434 "uuid": "f04ed7eb-e247-45e7-bf04-d33d63ccf6c8", 00:08:21.434 "is_configured": true, 00:08:21.434 "data_offset": 2048, 00:08:21.434 "data_size": 63488 00:08:21.434 } 00:08:21.434 ] 00:08:21.434 }' 00:08:21.434 04:57:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:21.434 04:57:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:21.694 04:57:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:21.694 04:57:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:21.694 04:57:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.694 04:57:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:08:21.694 04:57:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.694 04:57:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:08:21.694 04:57:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:08:21.694 04:57:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:21.694 04:57:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.694 04:57:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:21.694 04:57:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.694 04:57:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 7850ff4c-230e-46fe-8f54-c97fff5afdd4 00:08:21.694 04:57:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.694 04:57:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:21.955 [2024-12-14 04:57:32.582651] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:08:21.955 [2024-12-14 04:57:32.582914] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:08:21.955 [2024-12-14 04:57:32.582972] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:21.955 [2024-12-14 04:57:32.583282] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:08:21.955 NewBaseBdev 00:08:21.955 [2024-12-14 04:57:32.583458] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:08:21.955 [2024-12-14 04:57:32.583472] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000006d00 00:08:21.955 [2024-12-14 04:57:32.583585] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:21.955 04:57:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.955 04:57:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:08:21.955 04:57:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:08:21.955 04:57:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:21.955 04:57:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:21.955 04:57:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:21.955 04:57:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:21.955 04:57:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:21.955 04:57:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.955 04:57:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:21.955 04:57:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.955 04:57:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:08:21.955 04:57:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.955 04:57:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:21.955 [ 00:08:21.955 { 00:08:21.955 "name": "NewBaseBdev", 00:08:21.955 "aliases": [ 00:08:21.955 "7850ff4c-230e-46fe-8f54-c97fff5afdd4" 00:08:21.955 ], 00:08:21.955 "product_name": "Malloc disk", 00:08:21.955 "block_size": 512, 
00:08:21.955 "num_blocks": 65536, 00:08:21.955 "uuid": "7850ff4c-230e-46fe-8f54-c97fff5afdd4", 00:08:21.955 "assigned_rate_limits": { 00:08:21.955 "rw_ios_per_sec": 0, 00:08:21.955 "rw_mbytes_per_sec": 0, 00:08:21.955 "r_mbytes_per_sec": 0, 00:08:21.955 "w_mbytes_per_sec": 0 00:08:21.955 }, 00:08:21.955 "claimed": true, 00:08:21.955 "claim_type": "exclusive_write", 00:08:21.955 "zoned": false, 00:08:21.955 "supported_io_types": { 00:08:21.955 "read": true, 00:08:21.955 "write": true, 00:08:21.955 "unmap": true, 00:08:21.955 "flush": true, 00:08:21.955 "reset": true, 00:08:21.955 "nvme_admin": false, 00:08:21.955 "nvme_io": false, 00:08:21.955 "nvme_io_md": false, 00:08:21.955 "write_zeroes": true, 00:08:21.955 "zcopy": true, 00:08:21.955 "get_zone_info": false, 00:08:21.955 "zone_management": false, 00:08:21.955 "zone_append": false, 00:08:21.955 "compare": false, 00:08:21.955 "compare_and_write": false, 00:08:21.955 "abort": true, 00:08:21.955 "seek_hole": false, 00:08:21.955 "seek_data": false, 00:08:21.955 "copy": true, 00:08:21.955 "nvme_iov_md": false 00:08:21.955 }, 00:08:21.955 "memory_domains": [ 00:08:21.955 { 00:08:21.955 "dma_device_id": "system", 00:08:21.955 "dma_device_type": 1 00:08:21.955 }, 00:08:21.955 { 00:08:21.955 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:21.955 "dma_device_type": 2 00:08:21.955 } 00:08:21.955 ], 00:08:21.955 "driver_specific": {} 00:08:21.955 } 00:08:21.955 ] 00:08:21.955 04:57:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.955 04:57:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:21.955 04:57:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:08:21.955 04:57:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:21.955 04:57:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 
-- # local expected_state=online 00:08:21.955 04:57:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:21.955 04:57:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:21.955 04:57:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:21.955 04:57:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:21.955 04:57:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:21.955 04:57:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:21.955 04:57:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:21.955 04:57:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:21.955 04:57:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:21.955 04:57:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.955 04:57:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:21.955 04:57:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.955 04:57:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:21.955 "name": "Existed_Raid", 00:08:21.955 "uuid": "fd70613d-135a-4ec5-bcf1-53c5eee12b5c", 00:08:21.955 "strip_size_kb": 64, 00:08:21.955 "state": "online", 00:08:21.955 "raid_level": "concat", 00:08:21.955 "superblock": true, 00:08:21.955 "num_base_bdevs": 3, 00:08:21.955 "num_base_bdevs_discovered": 3, 00:08:21.955 "num_base_bdevs_operational": 3, 00:08:21.955 "base_bdevs_list": [ 00:08:21.955 { 00:08:21.955 "name": "NewBaseBdev", 00:08:21.955 "uuid": 
"7850ff4c-230e-46fe-8f54-c97fff5afdd4", 00:08:21.955 "is_configured": true, 00:08:21.955 "data_offset": 2048, 00:08:21.955 "data_size": 63488 00:08:21.955 }, 00:08:21.955 { 00:08:21.955 "name": "BaseBdev2", 00:08:21.955 "uuid": "c892dd4b-848a-4bb5-97b9-735307a0af25", 00:08:21.955 "is_configured": true, 00:08:21.955 "data_offset": 2048, 00:08:21.955 "data_size": 63488 00:08:21.955 }, 00:08:21.955 { 00:08:21.955 "name": "BaseBdev3", 00:08:21.955 "uuid": "f04ed7eb-e247-45e7-bf04-d33d63ccf6c8", 00:08:21.955 "is_configured": true, 00:08:21.956 "data_offset": 2048, 00:08:21.956 "data_size": 63488 00:08:21.956 } 00:08:21.956 ] 00:08:21.956 }' 00:08:21.956 04:57:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:21.956 04:57:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:22.217 04:57:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:08:22.217 04:57:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:22.217 04:57:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:22.217 04:57:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:22.217 04:57:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:22.217 04:57:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:22.217 04:57:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:22.217 04:57:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:22.217 04:57:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.217 04:57:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:08:22.217 [2024-12-14 04:57:33.014255] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:22.217 04:57:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.217 04:57:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:22.217 "name": "Existed_Raid", 00:08:22.217 "aliases": [ 00:08:22.217 "fd70613d-135a-4ec5-bcf1-53c5eee12b5c" 00:08:22.217 ], 00:08:22.217 "product_name": "Raid Volume", 00:08:22.217 "block_size": 512, 00:08:22.217 "num_blocks": 190464, 00:08:22.217 "uuid": "fd70613d-135a-4ec5-bcf1-53c5eee12b5c", 00:08:22.217 "assigned_rate_limits": { 00:08:22.217 "rw_ios_per_sec": 0, 00:08:22.217 "rw_mbytes_per_sec": 0, 00:08:22.217 "r_mbytes_per_sec": 0, 00:08:22.217 "w_mbytes_per_sec": 0 00:08:22.217 }, 00:08:22.217 "claimed": false, 00:08:22.217 "zoned": false, 00:08:22.217 "supported_io_types": { 00:08:22.217 "read": true, 00:08:22.217 "write": true, 00:08:22.217 "unmap": true, 00:08:22.217 "flush": true, 00:08:22.217 "reset": true, 00:08:22.217 "nvme_admin": false, 00:08:22.217 "nvme_io": false, 00:08:22.217 "nvme_io_md": false, 00:08:22.217 "write_zeroes": true, 00:08:22.217 "zcopy": false, 00:08:22.217 "get_zone_info": false, 00:08:22.217 "zone_management": false, 00:08:22.217 "zone_append": false, 00:08:22.217 "compare": false, 00:08:22.217 "compare_and_write": false, 00:08:22.217 "abort": false, 00:08:22.217 "seek_hole": false, 00:08:22.217 "seek_data": false, 00:08:22.217 "copy": false, 00:08:22.217 "nvme_iov_md": false 00:08:22.217 }, 00:08:22.217 "memory_domains": [ 00:08:22.217 { 00:08:22.217 "dma_device_id": "system", 00:08:22.217 "dma_device_type": 1 00:08:22.217 }, 00:08:22.217 { 00:08:22.217 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:22.217 "dma_device_type": 2 00:08:22.217 }, 00:08:22.217 { 00:08:22.217 "dma_device_id": "system", 00:08:22.217 "dma_device_type": 1 00:08:22.217 }, 00:08:22.217 { 00:08:22.217 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:22.217 "dma_device_type": 2 00:08:22.217 }, 00:08:22.217 { 00:08:22.217 "dma_device_id": "system", 00:08:22.217 "dma_device_type": 1 00:08:22.217 }, 00:08:22.217 { 00:08:22.217 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:22.217 "dma_device_type": 2 00:08:22.217 } 00:08:22.217 ], 00:08:22.217 "driver_specific": { 00:08:22.217 "raid": { 00:08:22.217 "uuid": "fd70613d-135a-4ec5-bcf1-53c5eee12b5c", 00:08:22.217 "strip_size_kb": 64, 00:08:22.217 "state": "online", 00:08:22.217 "raid_level": "concat", 00:08:22.217 "superblock": true, 00:08:22.217 "num_base_bdevs": 3, 00:08:22.217 "num_base_bdevs_discovered": 3, 00:08:22.217 "num_base_bdevs_operational": 3, 00:08:22.217 "base_bdevs_list": [ 00:08:22.217 { 00:08:22.217 "name": "NewBaseBdev", 00:08:22.217 "uuid": "7850ff4c-230e-46fe-8f54-c97fff5afdd4", 00:08:22.217 "is_configured": true, 00:08:22.217 "data_offset": 2048, 00:08:22.217 "data_size": 63488 00:08:22.218 }, 00:08:22.218 { 00:08:22.218 "name": "BaseBdev2", 00:08:22.218 "uuid": "c892dd4b-848a-4bb5-97b9-735307a0af25", 00:08:22.218 "is_configured": true, 00:08:22.218 "data_offset": 2048, 00:08:22.218 "data_size": 63488 00:08:22.218 }, 00:08:22.218 { 00:08:22.218 "name": "BaseBdev3", 00:08:22.218 "uuid": "f04ed7eb-e247-45e7-bf04-d33d63ccf6c8", 00:08:22.218 "is_configured": true, 00:08:22.218 "data_offset": 2048, 00:08:22.218 "data_size": 63488 00:08:22.218 } 00:08:22.218 ] 00:08:22.218 } 00:08:22.218 } 00:08:22.218 }' 00:08:22.218 04:57:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:22.478 04:57:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:08:22.478 BaseBdev2 00:08:22.478 BaseBdev3' 00:08:22.478 04:57:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
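The `jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'` filter above, together with the per-base-bdev loop that follows, reduces each bdev to a single space-joined property string and compares it against the raid bdev's string (`512` followed by three spaces, since the md/dif fields are null for these malloc-backed bdevs). A minimal Python sketch of that comparison — the bdev dicts below are illustrative stand-ins for `rpc_cmd bdev_get_bdevs` output, not real RPC calls:

```python
def props_key(bdev):
    # Mimic jq's '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")':
    # jq renders null/missing fields as empty strings before space-joining.
    fields = (bdev.get("block_size"), bdev.get("md_size"),
              bdev.get("md_interleave"), bdev.get("dif_type"))
    return " ".join("" if f is None else str(f) for f in fields)

# Illustrative stand-ins for the bdevs in this log (md/dif fields unset).
raid_bdev = {"name": "Existed_Raid", "block_size": 512}
base_bdevs = [{"name": n, "block_size": 512}
              for n in ("NewBaseBdev", "BaseBdev2", "BaseBdev3")]

cmp_raid_bdev = props_key(raid_bdev)  # "512" plus three trailing spaces
assert all(props_key(b) == cmp_raid_bdev for b in base_bdevs)
```

This mirrors why the shell test compares against the escaped pattern `\5\1\2\ \ \ `: the trailing spaces produced by `join(" ")` over the null fields are significant, so a base bdev with a different block size, or with metadata/DIF enabled, would fail the `[[ ... == ... ]]` match.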
00:08:22.478 04:57:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:22.478 04:57:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:22.478 04:57:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:22.478 04:57:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:08:22.478 04:57:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.478 04:57:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:22.478 04:57:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.478 04:57:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:22.478 04:57:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:22.478 04:57:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:22.478 04:57:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:22.478 04:57:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.478 04:57:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:22.478 04:57:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:22.478 04:57:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.478 04:57:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:22.478 04:57:33 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:22.478 04:57:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:22.478 04:57:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:22.478 04:57:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.478 04:57:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:22.478 04:57:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:22.478 04:57:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.478 04:57:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:22.478 04:57:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:22.478 04:57:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:22.478 04:57:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.478 04:57:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:22.479 [2024-12-14 04:57:33.309429] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:22.479 [2024-12-14 04:57:33.309456] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:22.479 [2024-12-14 04:57:33.309529] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:22.479 [2024-12-14 04:57:33.309581] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:22.479 [2024-12-14 04:57:33.309593] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000006d00 name Existed_Raid, state offline 00:08:22.479 04:57:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.479 04:57:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 77370 00:08:22.479 04:57:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 77370 ']' 00:08:22.479 04:57:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 77370 00:08:22.479 04:57:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:08:22.479 04:57:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:22.479 04:57:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 77370 00:08:22.479 04:57:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:22.479 04:57:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:22.479 04:57:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 77370' 00:08:22.479 killing process with pid 77370 00:08:22.479 04:57:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 77370 00:08:22.479 [2024-12-14 04:57:33.347552] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:22.479 04:57:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 77370 00:08:22.747 [2024-12-14 04:57:33.378026] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:23.023 04:57:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:08:23.023 00:08:23.023 real 0m8.573s 00:08:23.023 user 0m14.656s 00:08:23.023 sys 0m1.674s 00:08:23.023 04:57:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:08:23.023 04:57:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:23.023 ************************************ 00:08:23.023 END TEST raid_state_function_test_sb 00:08:23.023 ************************************ 00:08:23.023 04:57:33 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:08:23.023 04:57:33 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:08:23.023 04:57:33 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:23.023 04:57:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:23.023 ************************************ 00:08:23.023 START TEST raid_superblock_test 00:08:23.023 ************************************ 00:08:23.023 04:57:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test concat 3 00:08:23.023 04:57:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:08:23.023 04:57:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:08:23.023 04:57:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:08:23.023 04:57:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:08:23.023 04:57:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:08:23.023 04:57:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:08:23.023 04:57:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:08:23.023 04:57:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:08:23.023 04:57:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:08:23.023 04:57:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:08:23.023 04:57:33 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:08:23.023 04:57:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:08:23.023 04:57:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:08:23.023 04:57:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:08:23.023 04:57:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:08:23.023 04:57:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:08:23.023 04:57:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=77972 00:08:23.023 04:57:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:08:23.023 04:57:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 77972 00:08:23.023 04:57:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 77972 ']' 00:08:23.023 04:57:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:23.023 04:57:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:23.023 04:57:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:23.023 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:23.023 04:57:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:23.023 04:57:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.023 [2024-12-14 04:57:33.787089] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:08:23.024 [2024-12-14 04:57:33.787320] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77972 ] 00:08:23.294 [2024-12-14 04:57:33.931935] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:23.294 [2024-12-14 04:57:33.976631] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:23.294 [2024-12-14 04:57:34.018434] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:23.294 [2024-12-14 04:57:34.018551] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:23.881 04:57:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:23.881 04:57:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:08:23.881 04:57:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:08:23.881 04:57:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:23.881 04:57:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:08:23.881 04:57:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:08:23.881 04:57:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:08:23.881 04:57:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:23.881 04:57:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:23.881 04:57:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:23.881 04:57:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:08:23.881 
04:57:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.881 04:57:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.881 malloc1 00:08:23.881 04:57:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.881 04:57:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:23.881 04:57:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.881 04:57:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.881 [2024-12-14 04:57:34.628671] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:23.881 [2024-12-14 04:57:34.628800] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:23.881 [2024-12-14 04:57:34.628840] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:23.881 [2024-12-14 04:57:34.628875] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:23.881 [2024-12-14 04:57:34.630953] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:23.881 [2024-12-14 04:57:34.631026] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:23.881 pt1 00:08:23.881 04:57:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.881 04:57:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:23.881 04:57:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:23.881 04:57:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:08:23.881 04:57:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:08:23.881 04:57:34 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:08:23.881 04:57:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:23.881 04:57:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:23.881 04:57:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:23.881 04:57:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:08:23.881 04:57:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.881 04:57:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.881 malloc2 00:08:23.882 04:57:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.882 04:57:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:23.882 04:57:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.882 04:57:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.882 [2024-12-14 04:57:34.677714] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:23.882 [2024-12-14 04:57:34.677826] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:23.882 [2024-12-14 04:57:34.677864] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:08:23.882 [2024-12-14 04:57:34.677891] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:23.882 [2024-12-14 04:57:34.682254] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:23.882 [2024-12-14 04:57:34.682312] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:23.882 
pt2 00:08:23.882 04:57:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.882 04:57:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:23.882 04:57:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:23.882 04:57:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:08:23.882 04:57:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:08:23.882 04:57:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:08:23.882 04:57:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:23.882 04:57:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:23.882 04:57:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:23.882 04:57:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:08:23.882 04:57:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.882 04:57:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.882 malloc3 00:08:23.882 04:57:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.882 04:57:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:08:23.882 04:57:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.882 04:57:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.882 [2024-12-14 04:57:34.707959] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:08:23.882 [2024-12-14 04:57:34.708066] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:23.882 [2024-12-14 04:57:34.708102] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:08:23.882 [2024-12-14 04:57:34.708133] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:23.882 [2024-12-14 04:57:34.710160] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:23.882 [2024-12-14 04:57:34.710240] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:08:23.882 pt3 00:08:23.882 04:57:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.882 04:57:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:23.882 04:57:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:23.882 04:57:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:08:23.882 04:57:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.882 04:57:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.882 [2024-12-14 04:57:34.719989] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:23.882 [2024-12-14 04:57:34.721793] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:23.882 [2024-12-14 04:57:34.721898] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:08:23.882 [2024-12-14 04:57:34.722093] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:08:23.882 [2024-12-14 04:57:34.722150] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:23.882 [2024-12-14 04:57:34.722433] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 
00:08:23.882 [2024-12-14 04:57:34.722624] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:08:23.882 [2024-12-14 04:57:34.722676] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:08:23.882 [2024-12-14 04:57:34.722859] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:23.882 04:57:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.882 04:57:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:08:23.882 04:57:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:23.882 04:57:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:23.882 04:57:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:23.882 04:57:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:23.882 04:57:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:23.882 04:57:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:23.882 04:57:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:23.882 04:57:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:23.882 04:57:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:23.882 04:57:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:23.882 04:57:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.882 04:57:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:23.882 04:57:34 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.882 04:57:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.142 04:57:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:24.142 "name": "raid_bdev1", 00:08:24.142 "uuid": "e8312db7-a704-43c1-b7e9-47f8216d8306", 00:08:24.142 "strip_size_kb": 64, 00:08:24.142 "state": "online", 00:08:24.142 "raid_level": "concat", 00:08:24.142 "superblock": true, 00:08:24.142 "num_base_bdevs": 3, 00:08:24.142 "num_base_bdevs_discovered": 3, 00:08:24.142 "num_base_bdevs_operational": 3, 00:08:24.142 "base_bdevs_list": [ 00:08:24.142 { 00:08:24.142 "name": "pt1", 00:08:24.142 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:24.142 "is_configured": true, 00:08:24.142 "data_offset": 2048, 00:08:24.142 "data_size": 63488 00:08:24.142 }, 00:08:24.142 { 00:08:24.142 "name": "pt2", 00:08:24.142 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:24.142 "is_configured": true, 00:08:24.142 "data_offset": 2048, 00:08:24.142 "data_size": 63488 00:08:24.142 }, 00:08:24.142 { 00:08:24.142 "name": "pt3", 00:08:24.142 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:24.142 "is_configured": true, 00:08:24.142 "data_offset": 2048, 00:08:24.142 "data_size": 63488 00:08:24.142 } 00:08:24.142 ] 00:08:24.142 }' 00:08:24.142 04:57:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:24.142 04:57:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.400 04:57:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:08:24.400 04:57:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:24.400 04:57:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:24.400 04:57:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:08:24.400 04:57:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:24.400 04:57:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:24.400 04:57:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:24.400 04:57:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.400 04:57:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.400 04:57:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:24.400 [2024-12-14 04:57:35.171618] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:24.400 04:57:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.400 04:57:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:24.400 "name": "raid_bdev1", 00:08:24.400 "aliases": [ 00:08:24.400 "e8312db7-a704-43c1-b7e9-47f8216d8306" 00:08:24.400 ], 00:08:24.400 "product_name": "Raid Volume", 00:08:24.400 "block_size": 512, 00:08:24.400 "num_blocks": 190464, 00:08:24.400 "uuid": "e8312db7-a704-43c1-b7e9-47f8216d8306", 00:08:24.400 "assigned_rate_limits": { 00:08:24.400 "rw_ios_per_sec": 0, 00:08:24.400 "rw_mbytes_per_sec": 0, 00:08:24.400 "r_mbytes_per_sec": 0, 00:08:24.400 "w_mbytes_per_sec": 0 00:08:24.400 }, 00:08:24.400 "claimed": false, 00:08:24.400 "zoned": false, 00:08:24.400 "supported_io_types": { 00:08:24.400 "read": true, 00:08:24.400 "write": true, 00:08:24.400 "unmap": true, 00:08:24.400 "flush": true, 00:08:24.400 "reset": true, 00:08:24.400 "nvme_admin": false, 00:08:24.400 "nvme_io": false, 00:08:24.400 "nvme_io_md": false, 00:08:24.400 "write_zeroes": true, 00:08:24.400 "zcopy": false, 00:08:24.400 "get_zone_info": false, 00:08:24.400 "zone_management": false, 00:08:24.400 "zone_append": false, 00:08:24.400 "compare": 
false, 00:08:24.400 "compare_and_write": false, 00:08:24.400 "abort": false, 00:08:24.400 "seek_hole": false, 00:08:24.400 "seek_data": false, 00:08:24.400 "copy": false, 00:08:24.400 "nvme_iov_md": false 00:08:24.400 }, 00:08:24.400 "memory_domains": [ 00:08:24.400 { 00:08:24.400 "dma_device_id": "system", 00:08:24.400 "dma_device_type": 1 00:08:24.400 }, 00:08:24.400 { 00:08:24.400 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:24.400 "dma_device_type": 2 00:08:24.400 }, 00:08:24.400 { 00:08:24.400 "dma_device_id": "system", 00:08:24.400 "dma_device_type": 1 00:08:24.400 }, 00:08:24.400 { 00:08:24.400 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:24.400 "dma_device_type": 2 00:08:24.400 }, 00:08:24.400 { 00:08:24.400 "dma_device_id": "system", 00:08:24.400 "dma_device_type": 1 00:08:24.400 }, 00:08:24.400 { 00:08:24.400 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:24.400 "dma_device_type": 2 00:08:24.400 } 00:08:24.400 ], 00:08:24.400 "driver_specific": { 00:08:24.400 "raid": { 00:08:24.400 "uuid": "e8312db7-a704-43c1-b7e9-47f8216d8306", 00:08:24.400 "strip_size_kb": 64, 00:08:24.400 "state": "online", 00:08:24.400 "raid_level": "concat", 00:08:24.400 "superblock": true, 00:08:24.401 "num_base_bdevs": 3, 00:08:24.401 "num_base_bdevs_discovered": 3, 00:08:24.401 "num_base_bdevs_operational": 3, 00:08:24.401 "base_bdevs_list": [ 00:08:24.401 { 00:08:24.401 "name": "pt1", 00:08:24.401 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:24.401 "is_configured": true, 00:08:24.401 "data_offset": 2048, 00:08:24.401 "data_size": 63488 00:08:24.401 }, 00:08:24.401 { 00:08:24.401 "name": "pt2", 00:08:24.401 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:24.401 "is_configured": true, 00:08:24.401 "data_offset": 2048, 00:08:24.401 "data_size": 63488 00:08:24.401 }, 00:08:24.401 { 00:08:24.401 "name": "pt3", 00:08:24.401 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:24.401 "is_configured": true, 00:08:24.401 "data_offset": 2048, 00:08:24.401 
"data_size": 63488 00:08:24.401 } 00:08:24.401 ] 00:08:24.401 } 00:08:24.401 } 00:08:24.401 }' 00:08:24.401 04:57:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:24.401 04:57:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:24.401 pt2 00:08:24.401 pt3' 00:08:24.401 04:57:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:24.659 04:57:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:24.659 04:57:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:24.659 04:57:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:24.659 04:57:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.659 04:57:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.659 04:57:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:24.659 04:57:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.659 04:57:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:24.659 04:57:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:24.659 04:57:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:24.659 04:57:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:24.659 04:57:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.659 04:57:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
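The `jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'` step above extracts the configured base bdev names (`pt1 pt2 pt3`) from the `raid_bdev_info` JSON for the per-bdev loop. A minimal Python equivalent of that filter, run over an illustrative subset of the JSON captured in this log:

```python
import json

# Illustrative subset of the raid_bdev_info JSON above; field names match
# the bdev_get_bdevs output shown in the log.
raid_bdev_info = json.loads("""
{
  "name": "raid_bdev1",
  "driver_specific": {
    "raid": {
      "base_bdevs_list": [
        {"name": "pt1", "is_configured": true},
        {"name": "pt2", "is_configured": true},
        {"name": "pt3", "is_configured": true}
      ]
    }
  }
}
""")

# Equivalent of:
#   jq -r '.driver_specific.raid.base_bdevs_list[]
#          | select(.is_configured == true).name'
base_bdev_names = [
    b["name"]
    for b in raid_bdev_info["driver_specific"]["raid"]["base_bdevs_list"]
    if b["is_configured"]
]
print("\n".join(base_bdev_names))
```

As in the shell test, the result is newline-separated names, which `for name in $base_bdev_names` then iterates; a base bdev with `"is_configured": false` (e.g. after removal) would simply drop out of the list.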
00:08:24.659 04:57:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:24.659 04:57:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.659 04:57:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:24.659 04:57:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:24.659 04:57:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:24.659 04:57:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:08:24.659 04:57:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.659 04:57:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.659 04:57:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:24.659 04:57:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.659 04:57:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:24.659 04:57:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:24.659 04:57:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:08:24.659 04:57:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:24.659 04:57:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.659 04:57:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.659 [2024-12-14 04:57:35.471035] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:24.659 04:57:35 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.659 04:57:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=e8312db7-a704-43c1-b7e9-47f8216d8306 00:08:24.659 04:57:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z e8312db7-a704-43c1-b7e9-47f8216d8306 ']' 00:08:24.659 04:57:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:24.659 04:57:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.659 04:57:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.659 [2024-12-14 04:57:35.518702] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:24.659 [2024-12-14 04:57:35.518767] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:24.659 [2024-12-14 04:57:35.518901] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:24.659 [2024-12-14 04:57:35.519007] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:24.659 [2024-12-14 04:57:35.519070] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:08:24.659 04:57:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.659 04:57:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:24.659 04:57:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.659 04:57:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:08:24.659 04:57:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.659 04:57:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.919 04:57:35 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@442 -- # raid_bdev= 00:08:24.919 04:57:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:08:24.919 04:57:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:24.919 04:57:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:08:24.919 04:57:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.919 04:57:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.919 04:57:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.919 04:57:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:24.919 04:57:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:08:24.919 04:57:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.919 04:57:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.919 04:57:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.919 04:57:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:24.919 04:57:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:08:24.919 04:57:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.919 04:57:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.919 04:57:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.919 04:57:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:08:24.919 04:57:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.919 04:57:35 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.919 04:57:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:08:24.919 04:57:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.919 04:57:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:08:24.919 04:57:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:24.919 04:57:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:08:24.919 04:57:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:24.919 04:57:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:08:24.919 04:57:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:24.919 04:57:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:08:24.919 04:57:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:24.919 04:57:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:24.919 04:57:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.919 04:57:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.919 [2024-12-14 04:57:35.654496] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:08:24.919 [2024-12-14 04:57:35.656388] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 
00:08:24.919 [2024-12-14 04:57:35.656477] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:08:24.919 [2024-12-14 04:57:35.656575] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:08:24.919 [2024-12-14 04:57:35.656679] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:08:24.919 [2024-12-14 04:57:35.656759] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:08:24.919 [2024-12-14 04:57:35.656825] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:24.919 [2024-12-14 04:57:35.656870] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state configuring 00:08:24.919 request: 00:08:24.919 { 00:08:24.919 "name": "raid_bdev1", 00:08:24.919 "raid_level": "concat", 00:08:24.919 "base_bdevs": [ 00:08:24.919 "malloc1", 00:08:24.919 "malloc2", 00:08:24.919 "malloc3" 00:08:24.919 ], 00:08:24.919 "strip_size_kb": 64, 00:08:24.919 "superblock": false, 00:08:24.919 "method": "bdev_raid_create", 00:08:24.919 "req_id": 1 00:08:24.919 } 00:08:24.919 Got JSON-RPC error response 00:08:24.919 response: 00:08:24.919 { 00:08:24.919 "code": -17, 00:08:24.919 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:08:24.919 } 00:08:24.919 04:57:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:08:24.919 04:57:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:08:24.919 04:57:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:24.919 04:57:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:24.919 04:57:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 
00:08:24.919 04:57:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:08:24.919 04:57:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:24.919 04:57:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.919 04:57:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.919 04:57:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.919 04:57:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:08:24.919 04:57:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:08:24.919 04:57:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:24.919 04:57:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.919 04:57:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.919 [2024-12-14 04:57:35.722351] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:24.919 [2024-12-14 04:57:35.722437] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:24.919 [2024-12-14 04:57:35.722460] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:08:24.919 [2024-12-14 04:57:35.722476] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:24.919 [2024-12-14 04:57:35.724610] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:24.919 [2024-12-14 04:57:35.724652] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:24.919 [2024-12-14 04:57:35.724712] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:24.919 [2024-12-14 04:57:35.724763] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:24.919 pt1 00:08:24.919 04:57:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.919 04:57:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:08:24.919 04:57:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:24.919 04:57:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:24.919 04:57:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:24.919 04:57:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:24.919 04:57:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:24.919 04:57:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:24.919 04:57:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:24.919 04:57:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:24.919 04:57:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:24.919 04:57:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:24.919 04:57:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.919 04:57:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.919 04:57:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:24.919 04:57:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.919 04:57:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:24.919 "name": "raid_bdev1", 
00:08:24.919 "uuid": "e8312db7-a704-43c1-b7e9-47f8216d8306", 00:08:24.919 "strip_size_kb": 64, 00:08:24.919 "state": "configuring", 00:08:24.919 "raid_level": "concat", 00:08:24.919 "superblock": true, 00:08:24.919 "num_base_bdevs": 3, 00:08:24.919 "num_base_bdevs_discovered": 1, 00:08:24.919 "num_base_bdevs_operational": 3, 00:08:24.919 "base_bdevs_list": [ 00:08:24.919 { 00:08:24.919 "name": "pt1", 00:08:24.919 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:24.919 "is_configured": true, 00:08:24.919 "data_offset": 2048, 00:08:24.919 "data_size": 63488 00:08:24.919 }, 00:08:24.919 { 00:08:24.919 "name": null, 00:08:24.919 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:24.919 "is_configured": false, 00:08:24.919 "data_offset": 2048, 00:08:24.919 "data_size": 63488 00:08:24.919 }, 00:08:24.919 { 00:08:24.919 "name": null, 00:08:24.919 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:24.919 "is_configured": false, 00:08:24.919 "data_offset": 2048, 00:08:24.919 "data_size": 63488 00:08:24.919 } 00:08:24.919 ] 00:08:24.919 }' 00:08:24.920 04:57:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:24.920 04:57:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.486 04:57:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:08:25.486 04:57:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:25.486 04:57:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.486 04:57:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.486 [2024-12-14 04:57:36.201545] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:25.486 [2024-12-14 04:57:36.201652] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:25.486 [2024-12-14 04:57:36.201713] 
vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:08:25.486 [2024-12-14 04:57:36.201753] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:25.486 [2024-12-14 04:57:36.202155] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:25.486 [2024-12-14 04:57:36.202229] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:25.486 [2024-12-14 04:57:36.202339] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:25.486 [2024-12-14 04:57:36.202397] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:25.486 pt2 00:08:25.486 04:57:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.486 04:57:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:08:25.486 04:57:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.486 04:57:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.486 [2024-12-14 04:57:36.209550] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:08:25.486 04:57:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.486 04:57:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:08:25.486 04:57:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:25.486 04:57:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:25.486 04:57:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:25.486 04:57:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:25.486 04:57:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 
-- # local num_base_bdevs_operational=3 00:08:25.486 04:57:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:25.486 04:57:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:25.486 04:57:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:25.486 04:57:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:25.486 04:57:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:25.486 04:57:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.486 04:57:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.486 04:57:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:25.486 04:57:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.486 04:57:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:25.486 "name": "raid_bdev1", 00:08:25.486 "uuid": "e8312db7-a704-43c1-b7e9-47f8216d8306", 00:08:25.486 "strip_size_kb": 64, 00:08:25.486 "state": "configuring", 00:08:25.486 "raid_level": "concat", 00:08:25.486 "superblock": true, 00:08:25.486 "num_base_bdevs": 3, 00:08:25.486 "num_base_bdevs_discovered": 1, 00:08:25.486 "num_base_bdevs_operational": 3, 00:08:25.486 "base_bdevs_list": [ 00:08:25.486 { 00:08:25.486 "name": "pt1", 00:08:25.486 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:25.486 "is_configured": true, 00:08:25.486 "data_offset": 2048, 00:08:25.486 "data_size": 63488 00:08:25.486 }, 00:08:25.486 { 00:08:25.486 "name": null, 00:08:25.486 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:25.486 "is_configured": false, 00:08:25.486 "data_offset": 0, 00:08:25.486 "data_size": 63488 00:08:25.486 }, 00:08:25.486 { 00:08:25.486 "name": null, 00:08:25.486 
"uuid": "00000000-0000-0000-0000-000000000003", 00:08:25.486 "is_configured": false, 00:08:25.486 "data_offset": 2048, 00:08:25.486 "data_size": 63488 00:08:25.486 } 00:08:25.486 ] 00:08:25.486 }' 00:08:25.486 04:57:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:25.486 04:57:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.745 04:57:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:08:25.745 04:57:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:25.745 04:57:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:25.745 04:57:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.745 04:57:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.003 [2024-12-14 04:57:36.628834] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:26.003 [2024-12-14 04:57:36.628892] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:26.003 [2024-12-14 04:57:36.628909] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:08:26.003 [2024-12-14 04:57:36.628917] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:26.003 [2024-12-14 04:57:36.629298] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:26.003 [2024-12-14 04:57:36.629316] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:26.003 [2024-12-14 04:57:36.629384] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:26.003 [2024-12-14 04:57:36.629409] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:26.003 pt2 00:08:26.003 04:57:36 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.004 04:57:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:26.004 04:57:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:26.004 04:57:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:08:26.004 04:57:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.004 04:57:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.004 [2024-12-14 04:57:36.640796] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:08:26.004 [2024-12-14 04:57:36.640840] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:26.004 [2024-12-14 04:57:36.640856] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:08:26.004 [2024-12-14 04:57:36.640864] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:26.004 [2024-12-14 04:57:36.641199] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:26.004 [2024-12-14 04:57:36.641216] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:08:26.004 [2024-12-14 04:57:36.641287] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:08:26.004 [2024-12-14 04:57:36.641303] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:08:26.004 [2024-12-14 04:57:36.641390] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:08:26.004 [2024-12-14 04:57:36.641403] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:26.004 [2024-12-14 04:57:36.641630] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000005d40 00:08:26.004 [2024-12-14 04:57:36.641742] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:08:26.004 [2024-12-14 04:57:36.641753] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:08:26.004 [2024-12-14 04:57:36.641845] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:26.004 pt3 00:08:26.004 04:57:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.004 04:57:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:26.004 04:57:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:26.004 04:57:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:08:26.004 04:57:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:26.004 04:57:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:26.004 04:57:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:26.004 04:57:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:26.004 04:57:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:26.004 04:57:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:26.004 04:57:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:26.004 04:57:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:26.004 04:57:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:26.004 04:57:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:26.004 04:57:36 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:26.004 04:57:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.004 04:57:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.004 04:57:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.004 04:57:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:26.004 "name": "raid_bdev1", 00:08:26.004 "uuid": "e8312db7-a704-43c1-b7e9-47f8216d8306", 00:08:26.004 "strip_size_kb": 64, 00:08:26.004 "state": "online", 00:08:26.004 "raid_level": "concat", 00:08:26.004 "superblock": true, 00:08:26.004 "num_base_bdevs": 3, 00:08:26.004 "num_base_bdevs_discovered": 3, 00:08:26.004 "num_base_bdevs_operational": 3, 00:08:26.004 "base_bdevs_list": [ 00:08:26.004 { 00:08:26.004 "name": "pt1", 00:08:26.004 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:26.004 "is_configured": true, 00:08:26.004 "data_offset": 2048, 00:08:26.004 "data_size": 63488 00:08:26.004 }, 00:08:26.004 { 00:08:26.004 "name": "pt2", 00:08:26.004 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:26.004 "is_configured": true, 00:08:26.004 "data_offset": 2048, 00:08:26.004 "data_size": 63488 00:08:26.004 }, 00:08:26.004 { 00:08:26.004 "name": "pt3", 00:08:26.004 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:26.004 "is_configured": true, 00:08:26.004 "data_offset": 2048, 00:08:26.004 "data_size": 63488 00:08:26.004 } 00:08:26.004 ] 00:08:26.004 }' 00:08:26.004 04:57:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:26.004 04:57:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.263 04:57:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:08:26.263 04:57:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local 
raid_bdev_name=raid_bdev1 00:08:26.263 04:57:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:26.263 04:57:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:26.263 04:57:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:26.263 04:57:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:26.263 04:57:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:26.263 04:57:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:26.263 04:57:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.263 04:57:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.263 [2024-12-14 04:57:37.064433] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:26.263 04:57:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.263 04:57:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:26.263 "name": "raid_bdev1", 00:08:26.263 "aliases": [ 00:08:26.263 "e8312db7-a704-43c1-b7e9-47f8216d8306" 00:08:26.263 ], 00:08:26.263 "product_name": "Raid Volume", 00:08:26.263 "block_size": 512, 00:08:26.263 "num_blocks": 190464, 00:08:26.263 "uuid": "e8312db7-a704-43c1-b7e9-47f8216d8306", 00:08:26.263 "assigned_rate_limits": { 00:08:26.263 "rw_ios_per_sec": 0, 00:08:26.263 "rw_mbytes_per_sec": 0, 00:08:26.263 "r_mbytes_per_sec": 0, 00:08:26.263 "w_mbytes_per_sec": 0 00:08:26.263 }, 00:08:26.263 "claimed": false, 00:08:26.263 "zoned": false, 00:08:26.263 "supported_io_types": { 00:08:26.263 "read": true, 00:08:26.263 "write": true, 00:08:26.263 "unmap": true, 00:08:26.263 "flush": true, 00:08:26.263 "reset": true, 00:08:26.263 "nvme_admin": false, 00:08:26.263 "nvme_io": false, 00:08:26.263 
"nvme_io_md": false, 00:08:26.263 "write_zeroes": true, 00:08:26.263 "zcopy": false, 00:08:26.263 "get_zone_info": false, 00:08:26.263 "zone_management": false, 00:08:26.263 "zone_append": false, 00:08:26.263 "compare": false, 00:08:26.263 "compare_and_write": false, 00:08:26.263 "abort": false, 00:08:26.263 "seek_hole": false, 00:08:26.263 "seek_data": false, 00:08:26.263 "copy": false, 00:08:26.263 "nvme_iov_md": false 00:08:26.263 }, 00:08:26.263 "memory_domains": [ 00:08:26.263 { 00:08:26.263 "dma_device_id": "system", 00:08:26.263 "dma_device_type": 1 00:08:26.263 }, 00:08:26.263 { 00:08:26.263 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:26.263 "dma_device_type": 2 00:08:26.263 }, 00:08:26.263 { 00:08:26.263 "dma_device_id": "system", 00:08:26.263 "dma_device_type": 1 00:08:26.263 }, 00:08:26.263 { 00:08:26.263 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:26.263 "dma_device_type": 2 00:08:26.263 }, 00:08:26.263 { 00:08:26.263 "dma_device_id": "system", 00:08:26.263 "dma_device_type": 1 00:08:26.263 }, 00:08:26.263 { 00:08:26.263 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:26.263 "dma_device_type": 2 00:08:26.263 } 00:08:26.263 ], 00:08:26.263 "driver_specific": { 00:08:26.263 "raid": { 00:08:26.263 "uuid": "e8312db7-a704-43c1-b7e9-47f8216d8306", 00:08:26.263 "strip_size_kb": 64, 00:08:26.263 "state": "online", 00:08:26.263 "raid_level": "concat", 00:08:26.263 "superblock": true, 00:08:26.263 "num_base_bdevs": 3, 00:08:26.263 "num_base_bdevs_discovered": 3, 00:08:26.263 "num_base_bdevs_operational": 3, 00:08:26.263 "base_bdevs_list": [ 00:08:26.263 { 00:08:26.263 "name": "pt1", 00:08:26.263 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:26.263 "is_configured": true, 00:08:26.263 "data_offset": 2048, 00:08:26.263 "data_size": 63488 00:08:26.263 }, 00:08:26.263 { 00:08:26.263 "name": "pt2", 00:08:26.263 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:26.263 "is_configured": true, 00:08:26.263 "data_offset": 2048, 00:08:26.263 "data_size": 
63488 00:08:26.263 }, 00:08:26.263 { 00:08:26.263 "name": "pt3", 00:08:26.263 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:26.263 "is_configured": true, 00:08:26.263 "data_offset": 2048, 00:08:26.263 "data_size": 63488 00:08:26.263 } 00:08:26.263 ] 00:08:26.263 } 00:08:26.263 } 00:08:26.263 }' 00:08:26.263 04:57:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:26.522 04:57:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:26.522 pt2 00:08:26.522 pt3' 00:08:26.522 04:57:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:26.522 04:57:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:26.522 04:57:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:26.522 04:57:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:26.522 04:57:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:26.522 04:57:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.522 04:57:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.522 04:57:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.522 04:57:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:26.522 04:57:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:26.522 04:57:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:26.522 04:57:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 
00:08:26.522 04:57:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.522 04:57:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.522 04:57:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:26.522 04:57:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.522 04:57:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:26.522 04:57:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:26.522 04:57:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:26.522 04:57:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:26.523 04:57:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:08:26.523 04:57:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.523 04:57:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.523 04:57:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.523 04:57:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:26.523 04:57:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:26.523 04:57:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:08:26.523 04:57:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:26.523 04:57:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.523 04:57:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- 
# set +x 00:08:26.523 [2024-12-14 04:57:37.363842] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:26.523 04:57:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.782 04:57:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' e8312db7-a704-43c1-b7e9-47f8216d8306 '!=' e8312db7-a704-43c1-b7e9-47f8216d8306 ']' 00:08:26.782 04:57:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:08:26.782 04:57:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:26.782 04:57:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:26.782 04:57:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 77972 00:08:26.782 04:57:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 77972 ']' 00:08:26.782 04:57:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 77972 00:08:26.782 04:57:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:08:26.782 04:57:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:26.782 04:57:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 77972 00:08:26.782 killing process with pid 77972 00:08:26.782 04:57:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:26.782 04:57:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:26.782 04:57:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 77972' 00:08:26.782 04:57:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 77972 00:08:26.782 [2024-12-14 04:57:37.440484] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:26.782 [2024-12-14 04:57:37.440562] 
bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:26.782 [2024-12-14 04:57:37.440625] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:26.782 [2024-12-14 04:57:37.440634] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:08:26.782 04:57:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 77972 00:08:26.782 [2024-12-14 04:57:37.472807] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:27.041 04:57:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:08:27.041 00:08:27.041 real 0m4.008s 00:08:27.041 user 0m6.326s 00:08:27.041 sys 0m0.813s 00:08:27.041 ************************************ 00:08:27.041 END TEST raid_superblock_test 00:08:27.041 ************************************ 00:08:27.041 04:57:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:27.041 04:57:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.041 04:57:37 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 3 read 00:08:27.041 04:57:37 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:27.041 04:57:37 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:27.041 04:57:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:27.041 ************************************ 00:08:27.041 START TEST raid_read_error_test 00:08:27.041 ************************************ 00:08:27.041 04:57:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 3 read 00:08:27.041 04:57:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:08:27.041 04:57:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:08:27.041 04:57:37 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:08:27.041 04:57:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:27.041 04:57:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:27.041 04:57:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:27.041 04:57:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:27.041 04:57:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:27.041 04:57:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:27.041 04:57:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:27.041 04:57:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:27.041 04:57:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:08:27.041 04:57:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:27.041 04:57:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:27.041 04:57:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:27.041 04:57:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:27.042 04:57:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:27.042 04:57:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:27.042 04:57:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:27.042 04:57:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:27.042 04:57:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:27.042 04:57:37 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:08:27.042 04:57:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:27.042 04:57:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:27.042 04:57:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:27.042 04:57:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.NezZjuOpdU 00:08:27.042 04:57:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=78210 00:08:27.042 04:57:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:27.042 04:57:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 78210 00:08:27.042 04:57:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 78210 ']' 00:08:27.042 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:27.042 04:57:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:27.042 04:57:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:27.042 04:57:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:27.042 04:57:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:27.042 04:57:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.042 [2024-12-14 04:57:37.883462] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:08:27.042 [2024-12-14 04:57:37.883585] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78210 ] 00:08:27.301 [2024-12-14 04:57:38.042789] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:27.301 [2024-12-14 04:57:38.088339] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:27.301 [2024-12-14 04:57:38.130502] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:27.301 [2024-12-14 04:57:38.130536] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:27.869 04:57:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:27.869 04:57:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:08:27.869 04:57:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:27.870 04:57:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:27.870 04:57:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.870 04:57:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.870 BaseBdev1_malloc 00:08:27.870 04:57:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.870 04:57:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:27.870 04:57:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.870 04:57:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.870 true 00:08:27.870 04:57:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:08:27.870 04:57:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:27.870 04:57:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.870 04:57:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.870 [2024-12-14 04:57:38.744555] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:27.870 [2024-12-14 04:57:38.744604] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:27.870 [2024-12-14 04:57:38.744639] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:27.870 [2024-12-14 04:57:38.744648] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:27.870 [2024-12-14 04:57:38.746679] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:27.870 [2024-12-14 04:57:38.746777] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:28.129 BaseBdev1 00:08:28.129 04:57:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.129 04:57:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:28.129 04:57:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:28.129 04:57:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.129 04:57:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.129 BaseBdev2_malloc 00:08:28.129 04:57:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.129 04:57:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:28.129 04:57:38 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.129 04:57:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.129 true 00:08:28.129 04:57:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.129 04:57:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:28.129 04:57:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.129 04:57:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.129 [2024-12-14 04:57:38.801506] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:28.129 [2024-12-14 04:57:38.801570] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:28.129 [2024-12-14 04:57:38.801596] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:28.129 [2024-12-14 04:57:38.801608] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:28.129 [2024-12-14 04:57:38.804453] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:28.129 [2024-12-14 04:57:38.804500] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:28.129 BaseBdev2 00:08:28.130 04:57:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.130 04:57:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:28.130 04:57:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:08:28.130 04:57:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.130 04:57:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.130 BaseBdev3_malloc 00:08:28.130 04:57:38 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.130 04:57:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:08:28.130 04:57:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.130 04:57:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.130 true 00:08:28.130 04:57:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.130 04:57:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:08:28.130 04:57:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.130 04:57:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.130 [2024-12-14 04:57:38.841903] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:08:28.130 [2024-12-14 04:57:38.841953] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:28.130 [2024-12-14 04:57:38.841974] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:08:28.130 [2024-12-14 04:57:38.841984] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:28.130 [2024-12-14 04:57:38.844218] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:28.130 [2024-12-14 04:57:38.844249] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:08:28.130 BaseBdev3 00:08:28.130 04:57:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.130 04:57:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:08:28.130 04:57:38 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.130 04:57:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.130 [2024-12-14 04:57:38.853935] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:28.130 [2024-12-14 04:57:38.855720] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:28.130 [2024-12-14 04:57:38.855804] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:28.130 [2024-12-14 04:57:38.855989] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:08:28.130 [2024-12-14 04:57:38.856014] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:28.130 [2024-12-14 04:57:38.856260] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:08:28.130 [2024-12-14 04:57:38.856409] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:08:28.130 [2024-12-14 04:57:38.856429] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00 00:08:28.130 [2024-12-14 04:57:38.856572] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:28.130 04:57:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.130 04:57:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:08:28.130 04:57:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:28.130 04:57:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:28.130 04:57:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:28.130 04:57:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:28.130 04:57:38 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:28.130 04:57:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:28.130 04:57:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:28.130 04:57:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:28.130 04:57:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:28.130 04:57:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:28.130 04:57:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:28.130 04:57:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.130 04:57:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.130 04:57:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.130 04:57:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:28.130 "name": "raid_bdev1", 00:08:28.130 "uuid": "69daa215-b855-4c60-bff6-e61750b77b1f", 00:08:28.130 "strip_size_kb": 64, 00:08:28.130 "state": "online", 00:08:28.130 "raid_level": "concat", 00:08:28.130 "superblock": true, 00:08:28.130 "num_base_bdevs": 3, 00:08:28.130 "num_base_bdevs_discovered": 3, 00:08:28.130 "num_base_bdevs_operational": 3, 00:08:28.130 "base_bdevs_list": [ 00:08:28.130 { 00:08:28.130 "name": "BaseBdev1", 00:08:28.130 "uuid": "aea10aec-ca9e-5299-8baf-67a97b0e8152", 00:08:28.130 "is_configured": true, 00:08:28.130 "data_offset": 2048, 00:08:28.130 "data_size": 63488 00:08:28.130 }, 00:08:28.130 { 00:08:28.130 "name": "BaseBdev2", 00:08:28.130 "uuid": "68c5a0c9-3543-56fb-8683-44afb4f2a364", 00:08:28.130 "is_configured": true, 00:08:28.130 "data_offset": 2048, 00:08:28.130 "data_size": 63488 
00:08:28.130 }, 00:08:28.130 { 00:08:28.130 "name": "BaseBdev3", 00:08:28.130 "uuid": "59da1b54-7c0a-564f-81f6-63f8c85cd004", 00:08:28.130 "is_configured": true, 00:08:28.130 "data_offset": 2048, 00:08:28.130 "data_size": 63488 00:08:28.130 } 00:08:28.130 ] 00:08:28.130 }' 00:08:28.130 04:57:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:28.130 04:57:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.698 04:57:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:28.698 04:57:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:28.698 [2024-12-14 04:57:39.377428] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:29.637 04:57:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:08:29.637 04:57:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.637 04:57:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.638 04:57:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.638 04:57:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:29.638 04:57:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:08:29.638 04:57:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:08:29.638 04:57:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:08:29.638 04:57:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:29.638 04:57:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:08:29.638 04:57:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:29.638 04:57:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:29.638 04:57:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:29.638 04:57:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:29.638 04:57:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:29.638 04:57:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:29.638 04:57:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:29.638 04:57:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:29.638 04:57:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:29.638 04:57:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.638 04:57:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.638 04:57:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.638 04:57:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:29.638 "name": "raid_bdev1", 00:08:29.638 "uuid": "69daa215-b855-4c60-bff6-e61750b77b1f", 00:08:29.638 "strip_size_kb": 64, 00:08:29.638 "state": "online", 00:08:29.638 "raid_level": "concat", 00:08:29.638 "superblock": true, 00:08:29.638 "num_base_bdevs": 3, 00:08:29.638 "num_base_bdevs_discovered": 3, 00:08:29.638 "num_base_bdevs_operational": 3, 00:08:29.638 "base_bdevs_list": [ 00:08:29.638 { 00:08:29.638 "name": "BaseBdev1", 00:08:29.638 "uuid": "aea10aec-ca9e-5299-8baf-67a97b0e8152", 00:08:29.638 "is_configured": true, 00:08:29.638 "data_offset": 2048, 00:08:29.638 "data_size": 63488 
00:08:29.638 }, 00:08:29.638 { 00:08:29.638 "name": "BaseBdev2", 00:08:29.638 "uuid": "68c5a0c9-3543-56fb-8683-44afb4f2a364", 00:08:29.638 "is_configured": true, 00:08:29.638 "data_offset": 2048, 00:08:29.638 "data_size": 63488 00:08:29.638 }, 00:08:29.638 { 00:08:29.638 "name": "BaseBdev3", 00:08:29.638 "uuid": "59da1b54-7c0a-564f-81f6-63f8c85cd004", 00:08:29.638 "is_configured": true, 00:08:29.638 "data_offset": 2048, 00:08:29.638 "data_size": 63488 00:08:29.638 } 00:08:29.638 ] 00:08:29.638 }' 00:08:29.638 04:57:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:29.638 04:57:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.897 04:57:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:29.897 04:57:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.897 04:57:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.897 [2024-12-14 04:57:40.704743] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:29.897 [2024-12-14 04:57:40.704780] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:29.897 [2024-12-14 04:57:40.707239] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:29.897 [2024-12-14 04:57:40.707296] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:29.897 [2024-12-14 04:57:40.707330] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:29.898 [2024-12-14 04:57:40.707348] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:08:29.898 { 00:08:29.898 "results": [ 00:08:29.898 { 00:08:29.898 "job": "raid_bdev1", 00:08:29.898 "core_mask": "0x1", 00:08:29.898 "workload": "randrw", 00:08:29.898 "percentage": 50, 
00:08:29.898 "status": "finished", 00:08:29.898 "queue_depth": 1, 00:08:29.898 "io_size": 131072, 00:08:29.898 "runtime": 1.328108, 00:08:29.898 "iops": 17566.342496242774, 00:08:29.898 "mibps": 2195.792812030347, 00:08:29.898 "io_failed": 1, 00:08:29.898 "io_timeout": 0, 00:08:29.898 "avg_latency_us": 78.91689969995127, 00:08:29.898 "min_latency_us": 24.370305676855896, 00:08:29.898 "max_latency_us": 1359.3711790393013 00:08:29.898 } 00:08:29.898 ], 00:08:29.898 "core_count": 1 00:08:29.898 } 00:08:29.898 04:57:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.898 04:57:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 78210 00:08:29.898 04:57:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 78210 ']' 00:08:29.898 04:57:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 78210 00:08:29.898 04:57:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:08:29.898 04:57:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:29.898 04:57:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 78210 00:08:29.898 04:57:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:29.898 04:57:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:29.898 killing process with pid 78210 00:08:29.898 04:57:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 78210' 00:08:29.898 04:57:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 78210 00:08:29.898 [2024-12-14 04:57:40.751604] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:29.898 04:57:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 78210 00:08:29.898 [2024-12-14 
04:57:40.777248] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:30.157 04:57:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.NezZjuOpdU 00:08:30.157 04:57:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:30.157 04:57:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:30.157 04:57:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.75 00:08:30.157 04:57:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:08:30.157 04:57:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:30.157 04:57:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:30.157 04:57:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.75 != \0\.\0\0 ]] 00:08:30.157 00:08:30.157 real 0m3.233s 00:08:30.157 user 0m4.064s 00:08:30.157 sys 0m0.491s 00:08:30.157 04:57:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:30.157 04:57:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.157 ************************************ 00:08:30.157 END TEST raid_read_error_test 00:08:30.157 ************************************ 00:08:30.416 04:57:41 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 3 write 00:08:30.416 04:57:41 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:30.416 04:57:41 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:30.416 04:57:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:30.416 ************************************ 00:08:30.416 START TEST raid_write_error_test 00:08:30.416 ************************************ 00:08:30.416 04:57:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 3 write 00:08:30.416 04:57:41 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:08:30.416 04:57:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:08:30.416 04:57:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:08:30.416 04:57:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:30.416 04:57:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:30.416 04:57:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:30.416 04:57:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:30.416 04:57:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:30.416 04:57:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:30.416 04:57:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:30.416 04:57:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:30.416 04:57:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:08:30.416 04:57:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:30.416 04:57:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:30.416 04:57:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:30.416 04:57:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:30.416 04:57:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:30.416 04:57:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:30.416 04:57:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:30.416 04:57:41 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:30.416 04:57:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:30.416 04:57:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:08:30.416 04:57:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:30.416 04:57:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:30.416 04:57:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:30.416 04:57:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.5tO0yfwQuF 00:08:30.416 04:57:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=78339 00:08:30.416 04:57:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:30.416 04:57:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 78339 00:08:30.416 04:57:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 78339 ']' 00:08:30.416 04:57:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:30.416 04:57:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:30.416 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:30.416 04:57:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:30.416 04:57:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:30.416 04:57:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.416 [2024-12-14 04:57:41.187126] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:08:30.416 [2024-12-14 04:57:41.187270] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78339 ] 00:08:30.675 [2024-12-14 04:57:41.342974] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:30.675 [2024-12-14 04:57:41.388499] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:30.675 [2024-12-14 04:57:41.430272] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:30.675 [2024-12-14 04:57:41.430314] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:31.244 04:57:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:31.244 04:57:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:08:31.245 04:57:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:31.245 04:57:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:31.245 04:57:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.245 04:57:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.245 BaseBdev1_malloc 00:08:31.245 04:57:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.245 04:57:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:08:31.245 04:57:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.245 04:57:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.245 true 00:08:31.245 04:57:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.245 04:57:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:31.245 04:57:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.245 04:57:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.245 [2024-12-14 04:57:42.040254] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:31.245 [2024-12-14 04:57:42.040302] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:31.245 [2024-12-14 04:57:42.040321] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:31.245 [2024-12-14 04:57:42.040330] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:31.245 [2024-12-14 04:57:42.042356] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:31.245 [2024-12-14 04:57:42.042390] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:31.245 BaseBdev1 00:08:31.245 04:57:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.245 04:57:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:31.245 04:57:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:31.245 04:57:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.245 04:57:42 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:31.245 BaseBdev2_malloc 00:08:31.245 04:57:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.245 04:57:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:31.245 04:57:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.245 04:57:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.245 true 00:08:31.245 04:57:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.245 04:57:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:31.245 04:57:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.245 04:57:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.245 [2024-12-14 04:57:42.096575] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:31.245 [2024-12-14 04:57:42.096637] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:31.245 [2024-12-14 04:57:42.096664] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:31.245 [2024-12-14 04:57:42.096677] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:31.245 [2024-12-14 04:57:42.099616] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:31.245 [2024-12-14 04:57:42.099652] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:31.245 BaseBdev2 00:08:31.245 04:57:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.245 04:57:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:31.245 04:57:42 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:08:31.245 04:57:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.245 04:57:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.245 BaseBdev3_malloc 00:08:31.245 04:57:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.245 04:57:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:08:31.245 04:57:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.245 04:57:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.504 true 00:08:31.504 04:57:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.504 04:57:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:08:31.504 04:57:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.504 04:57:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.504 [2024-12-14 04:57:42.137012] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:08:31.504 [2024-12-14 04:57:42.137054] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:31.504 [2024-12-14 04:57:42.137070] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:08:31.504 [2024-12-14 04:57:42.137079] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:31.504 [2024-12-14 04:57:42.139077] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:31.504 [2024-12-14 04:57:42.139113] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:08:31.504 BaseBdev3 00:08:31.504 04:57:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.504 04:57:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:08:31.504 04:57:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.504 04:57:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.504 [2024-12-14 04:57:42.149045] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:31.504 [2024-12-14 04:57:42.150812] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:31.504 [2024-12-14 04:57:42.150908] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:31.504 [2024-12-14 04:57:42.151071] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:08:31.504 [2024-12-14 04:57:42.151092] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:31.504 [2024-12-14 04:57:42.151345] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:08:31.504 [2024-12-14 04:57:42.151492] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:08:31.504 [2024-12-14 04:57:42.151513] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00 00:08:31.504 [2024-12-14 04:57:42.151644] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:31.504 04:57:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.504 04:57:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:08:31.504 04:57:42 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:31.505 04:57:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:31.505 04:57:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:31.505 04:57:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:31.505 04:57:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:31.505 04:57:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:31.505 04:57:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:31.505 04:57:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:31.505 04:57:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:31.505 04:57:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:31.505 04:57:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:31.505 04:57:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.505 04:57:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.505 04:57:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.505 04:57:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:31.505 "name": "raid_bdev1", 00:08:31.505 "uuid": "d2970547-48eb-41fa-bc30-c2c393bb5322", 00:08:31.505 "strip_size_kb": 64, 00:08:31.505 "state": "online", 00:08:31.505 "raid_level": "concat", 00:08:31.505 "superblock": true, 00:08:31.505 "num_base_bdevs": 3, 00:08:31.505 "num_base_bdevs_discovered": 3, 00:08:31.505 "num_base_bdevs_operational": 3, 00:08:31.505 "base_bdevs_list": [ 00:08:31.505 { 00:08:31.505 
"name": "BaseBdev1", 00:08:31.505 "uuid": "446a8467-5c1c-5cbc-80db-f1018040f151", 00:08:31.505 "is_configured": true, 00:08:31.505 "data_offset": 2048, 00:08:31.505 "data_size": 63488 00:08:31.505 }, 00:08:31.505 { 00:08:31.505 "name": "BaseBdev2", 00:08:31.505 "uuid": "60ab2346-17d2-53f5-87e4-828f396555b3", 00:08:31.505 "is_configured": true, 00:08:31.505 "data_offset": 2048, 00:08:31.505 "data_size": 63488 00:08:31.505 }, 00:08:31.505 { 00:08:31.505 "name": "BaseBdev3", 00:08:31.505 "uuid": "96c875df-0a39-53f5-ae36-557491a971ff", 00:08:31.505 "is_configured": true, 00:08:31.505 "data_offset": 2048, 00:08:31.505 "data_size": 63488 00:08:31.505 } 00:08:31.505 ] 00:08:31.505 }' 00:08:31.505 04:57:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:31.505 04:57:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.764 04:57:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:31.764 04:57:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:32.024 [2024-12-14 04:57:42.680580] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:32.961 04:57:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:08:32.961 04:57:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.961 04:57:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.961 04:57:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.961 04:57:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:32.961 04:57:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:08:32.961 04:57:43 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:08:32.961 04:57:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:08:32.961 04:57:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:32.961 04:57:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:32.961 04:57:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:32.961 04:57:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:32.961 04:57:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:32.961 04:57:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:32.961 04:57:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:32.961 04:57:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:32.961 04:57:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:32.961 04:57:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:32.961 04:57:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:32.961 04:57:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.961 04:57:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.961 04:57:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.961 04:57:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:32.961 "name": "raid_bdev1", 00:08:32.961 "uuid": "d2970547-48eb-41fa-bc30-c2c393bb5322", 00:08:32.961 "strip_size_kb": 64, 00:08:32.961 "state": "online", 
00:08:32.961 "raid_level": "concat", 00:08:32.961 "superblock": true, 00:08:32.961 "num_base_bdevs": 3, 00:08:32.961 "num_base_bdevs_discovered": 3, 00:08:32.961 "num_base_bdevs_operational": 3, 00:08:32.961 "base_bdevs_list": [ 00:08:32.961 { 00:08:32.961 "name": "BaseBdev1", 00:08:32.961 "uuid": "446a8467-5c1c-5cbc-80db-f1018040f151", 00:08:32.961 "is_configured": true, 00:08:32.961 "data_offset": 2048, 00:08:32.961 "data_size": 63488 00:08:32.961 }, 00:08:32.961 { 00:08:32.961 "name": "BaseBdev2", 00:08:32.961 "uuid": "60ab2346-17d2-53f5-87e4-828f396555b3", 00:08:32.961 "is_configured": true, 00:08:32.961 "data_offset": 2048, 00:08:32.961 "data_size": 63488 00:08:32.961 }, 00:08:32.961 { 00:08:32.961 "name": "BaseBdev3", 00:08:32.961 "uuid": "96c875df-0a39-53f5-ae36-557491a971ff", 00:08:32.961 "is_configured": true, 00:08:32.961 "data_offset": 2048, 00:08:32.961 "data_size": 63488 00:08:32.961 } 00:08:32.961 ] 00:08:32.961 }' 00:08:32.961 04:57:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:32.961 04:57:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.220 04:57:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:33.220 04:57:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.220 04:57:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.220 [2024-12-14 04:57:44.060315] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:33.220 [2024-12-14 04:57:44.060350] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:33.220 [2024-12-14 04:57:44.062805] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:33.220 [2024-12-14 04:57:44.062859] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:33.220 [2024-12-14 04:57:44.062893] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:33.220 [2024-12-14 04:57:44.062904] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:08:33.220 { 00:08:33.220 "results": [ 00:08:33.220 { 00:08:33.220 "job": "raid_bdev1", 00:08:33.220 "core_mask": "0x1", 00:08:33.220 "workload": "randrw", 00:08:33.220 "percentage": 50, 00:08:33.220 "status": "finished", 00:08:33.220 "queue_depth": 1, 00:08:33.220 "io_size": 131072, 00:08:33.220 "runtime": 1.380625, 00:08:33.220 "iops": 17466.72702580353, 00:08:33.220 "mibps": 2183.3408782254414, 00:08:33.220 "io_failed": 1, 00:08:33.220 "io_timeout": 0, 00:08:33.220 "avg_latency_us": 79.36330559500985, 00:08:33.220 "min_latency_us": 24.370305676855896, 00:08:33.220 "max_latency_us": 1323.598253275109 00:08:33.220 } 00:08:33.220 ], 00:08:33.220 "core_count": 1 00:08:33.220 } 00:08:33.220 04:57:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.220 04:57:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 78339 00:08:33.220 04:57:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 78339 ']' 00:08:33.220 04:57:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 78339 00:08:33.220 04:57:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:08:33.220 04:57:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:33.220 04:57:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 78339 00:08:33.479 04:57:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:33.479 04:57:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:33.479 killing process with pid 78339 00:08:33.479 04:57:44 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 78339' 00:08:33.479 04:57:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 78339 00:08:33.479 [2024-12-14 04:57:44.106791] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:33.479 04:57:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 78339 00:08:33.479 [2024-12-14 04:57:44.132305] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:33.739 04:57:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.5tO0yfwQuF 00:08:33.739 04:57:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:33.739 04:57:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:33.739 04:57:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:08:33.739 04:57:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:08:33.739 04:57:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:33.739 04:57:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:33.739 04:57:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:08:33.739 00:08:33.739 real 0m3.294s 00:08:33.739 user 0m4.163s 00:08:33.739 sys 0m0.519s 00:08:33.739 04:57:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:33.739 04:57:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.739 ************************************ 00:08:33.739 END TEST raid_write_error_test 00:08:33.739 ************************************ 00:08:33.739 04:57:44 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:08:33.739 04:57:44 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test raid1 3 false 00:08:33.739 04:57:44 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:33.739 04:57:44 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:33.739 04:57:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:33.739 ************************************ 00:08:33.739 START TEST raid_state_function_test 00:08:33.739 ************************************ 00:08:33.739 04:57:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 3 false 00:08:33.739 04:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:08:33.739 04:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:08:33.739 04:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:08:33.739 04:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:33.739 04:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:33.739 04:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:33.739 04:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:33.739 04:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:33.739 04:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:33.739 04:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:33.739 04:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:33.739 04:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:33.739 04:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:33.739 04:57:44 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:33.739 04:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:33.739 04:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:33.739 04:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:33.739 04:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:33.739 04:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:33.739 04:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:33.739 04:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:33.739 04:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:08:33.739 04:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:08:33.739 04:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:08:33.739 04:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:08:33.739 04:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=78477 00:08:33.739 04:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:33.739 Process raid pid: 78477 00:08:33.739 04:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 78477' 00:08:33.739 04:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 78477 00:08:33.739 04:57:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 78477 ']' 00:08:33.739 04:57:44 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:33.739 04:57:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:33.739 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:33.739 04:57:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:33.739 04:57:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:33.739 04:57:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.739 [2024-12-14 04:57:44.541604] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:08:33.739 [2024-12-14 04:57:44.541738] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:33.998 [2024-12-14 04:57:44.702583] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:33.998 [2024-12-14 04:57:44.748720] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:33.998 [2024-12-14 04:57:44.790593] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:33.998 [2024-12-14 04:57:44.790630] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:34.568 04:57:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:34.568 04:57:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:08:34.568 04:57:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:34.568 04:57:45 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.568 04:57:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.568 [2024-12-14 04:57:45.363897] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:34.568 [2024-12-14 04:57:45.363952] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:34.568 [2024-12-14 04:57:45.363964] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:34.568 [2024-12-14 04:57:45.363973] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:34.568 [2024-12-14 04:57:45.363980] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:34.568 [2024-12-14 04:57:45.363991] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:34.568 04:57:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.568 04:57:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:34.568 04:57:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:34.568 04:57:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:34.568 04:57:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:34.568 04:57:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:34.568 04:57:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:34.568 04:57:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:34.568 04:57:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:34.568 
04:57:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:34.568 04:57:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:34.568 04:57:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:34.568 04:57:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:34.568 04:57:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.568 04:57:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.568 04:57:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.568 04:57:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:34.568 "name": "Existed_Raid", 00:08:34.568 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:34.568 "strip_size_kb": 0, 00:08:34.568 "state": "configuring", 00:08:34.569 "raid_level": "raid1", 00:08:34.569 "superblock": false, 00:08:34.569 "num_base_bdevs": 3, 00:08:34.569 "num_base_bdevs_discovered": 0, 00:08:34.569 "num_base_bdevs_operational": 3, 00:08:34.569 "base_bdevs_list": [ 00:08:34.569 { 00:08:34.569 "name": "BaseBdev1", 00:08:34.569 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:34.569 "is_configured": false, 00:08:34.569 "data_offset": 0, 00:08:34.569 "data_size": 0 00:08:34.569 }, 00:08:34.569 { 00:08:34.569 "name": "BaseBdev2", 00:08:34.569 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:34.569 "is_configured": false, 00:08:34.569 "data_offset": 0, 00:08:34.569 "data_size": 0 00:08:34.569 }, 00:08:34.569 { 00:08:34.569 "name": "BaseBdev3", 00:08:34.569 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:34.569 "is_configured": false, 00:08:34.569 "data_offset": 0, 00:08:34.569 "data_size": 0 00:08:34.569 } 00:08:34.569 ] 00:08:34.569 }' 00:08:34.569 04:57:45 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:34.569 04:57:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.140 04:57:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:35.140 04:57:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.140 04:57:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.140 [2024-12-14 04:57:45.767206] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:35.140 [2024-12-14 04:57:45.767250] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:08:35.140 04:57:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.140 04:57:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:35.140 04:57:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.140 04:57:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.140 [2024-12-14 04:57:45.779238] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:35.140 [2024-12-14 04:57:45.779278] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:35.140 [2024-12-14 04:57:45.779286] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:35.140 [2024-12-14 04:57:45.779296] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:35.140 [2024-12-14 04:57:45.779302] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:35.140 [2024-12-14 04:57:45.779311] 
bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:35.140 04:57:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.140 04:57:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:35.140 04:57:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.140 04:57:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.140 [2024-12-14 04:57:45.799995] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:35.140 BaseBdev1 00:08:35.140 04:57:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.140 04:57:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:35.140 04:57:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:08:35.140 04:57:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:35.140 04:57:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:35.140 04:57:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:35.140 04:57:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:35.140 04:57:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:35.140 04:57:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.140 04:57:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.140 04:57:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.140 04:57:45 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:35.140 04:57:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.140 04:57:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.140 [ 00:08:35.140 { 00:08:35.140 "name": "BaseBdev1", 00:08:35.140 "aliases": [ 00:08:35.140 "1899944d-f3bb-4439-89c5-cee9ad311c18" 00:08:35.140 ], 00:08:35.140 "product_name": "Malloc disk", 00:08:35.140 "block_size": 512, 00:08:35.140 "num_blocks": 65536, 00:08:35.140 "uuid": "1899944d-f3bb-4439-89c5-cee9ad311c18", 00:08:35.140 "assigned_rate_limits": { 00:08:35.140 "rw_ios_per_sec": 0, 00:08:35.140 "rw_mbytes_per_sec": 0, 00:08:35.140 "r_mbytes_per_sec": 0, 00:08:35.140 "w_mbytes_per_sec": 0 00:08:35.140 }, 00:08:35.140 "claimed": true, 00:08:35.140 "claim_type": "exclusive_write", 00:08:35.140 "zoned": false, 00:08:35.140 "supported_io_types": { 00:08:35.140 "read": true, 00:08:35.140 "write": true, 00:08:35.140 "unmap": true, 00:08:35.140 "flush": true, 00:08:35.140 "reset": true, 00:08:35.140 "nvme_admin": false, 00:08:35.140 "nvme_io": false, 00:08:35.140 "nvme_io_md": false, 00:08:35.140 "write_zeroes": true, 00:08:35.140 "zcopy": true, 00:08:35.140 "get_zone_info": false, 00:08:35.140 "zone_management": false, 00:08:35.140 "zone_append": false, 00:08:35.140 "compare": false, 00:08:35.140 "compare_and_write": false, 00:08:35.140 "abort": true, 00:08:35.140 "seek_hole": false, 00:08:35.140 "seek_data": false, 00:08:35.140 "copy": true, 00:08:35.140 "nvme_iov_md": false 00:08:35.140 }, 00:08:35.140 "memory_domains": [ 00:08:35.140 { 00:08:35.140 "dma_device_id": "system", 00:08:35.140 "dma_device_type": 1 00:08:35.140 }, 00:08:35.140 { 00:08:35.140 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:35.140 "dma_device_type": 2 00:08:35.140 } 00:08:35.140 ], 00:08:35.140 "driver_specific": {} 00:08:35.140 } 00:08:35.140 ] 00:08:35.140 04:57:45 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.140 04:57:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:35.140 04:57:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:35.140 04:57:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:35.140 04:57:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:35.140 04:57:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:35.140 04:57:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:35.140 04:57:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:35.140 04:57:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:35.140 04:57:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:35.140 04:57:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:35.140 04:57:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:35.140 04:57:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:35.140 04:57:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.140 04:57:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:35.140 04:57:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.141 04:57:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.141 04:57:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:08:35.141 "name": "Existed_Raid", 00:08:35.141 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:35.141 "strip_size_kb": 0, 00:08:35.141 "state": "configuring", 00:08:35.141 "raid_level": "raid1", 00:08:35.141 "superblock": false, 00:08:35.141 "num_base_bdevs": 3, 00:08:35.141 "num_base_bdevs_discovered": 1, 00:08:35.141 "num_base_bdevs_operational": 3, 00:08:35.141 "base_bdevs_list": [ 00:08:35.141 { 00:08:35.141 "name": "BaseBdev1", 00:08:35.141 "uuid": "1899944d-f3bb-4439-89c5-cee9ad311c18", 00:08:35.141 "is_configured": true, 00:08:35.141 "data_offset": 0, 00:08:35.141 "data_size": 65536 00:08:35.141 }, 00:08:35.141 { 00:08:35.141 "name": "BaseBdev2", 00:08:35.141 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:35.141 "is_configured": false, 00:08:35.141 "data_offset": 0, 00:08:35.141 "data_size": 0 00:08:35.141 }, 00:08:35.141 { 00:08:35.141 "name": "BaseBdev3", 00:08:35.141 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:35.141 "is_configured": false, 00:08:35.141 "data_offset": 0, 00:08:35.141 "data_size": 0 00:08:35.141 } 00:08:35.141 ] 00:08:35.141 }' 00:08:35.141 04:57:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:35.141 04:57:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.400 04:57:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:35.400 04:57:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.400 04:57:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.400 [2024-12-14 04:57:46.263252] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:35.400 [2024-12-14 04:57:46.263304] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:08:35.400 04:57:46 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.400 04:57:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:35.400 04:57:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.400 04:57:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.400 [2024-12-14 04:57:46.275275] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:35.400 [2024-12-14 04:57:46.277060] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:35.400 [2024-12-14 04:57:46.277106] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:35.400 [2024-12-14 04:57:46.277116] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:35.400 [2024-12-14 04:57:46.277126] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:35.400 04:57:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.400 04:57:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:35.400 04:57:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:35.659 04:57:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:35.659 04:57:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:35.659 04:57:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:35.659 04:57:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:35.659 04:57:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:08:35.659 04:57:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:35.659 04:57:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:35.659 04:57:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:35.659 04:57:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:35.659 04:57:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:35.659 04:57:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:35.659 04:57:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:35.659 04:57:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.659 04:57:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.659 04:57:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.659 04:57:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:35.659 "name": "Existed_Raid", 00:08:35.659 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:35.659 "strip_size_kb": 0, 00:08:35.659 "state": "configuring", 00:08:35.659 "raid_level": "raid1", 00:08:35.659 "superblock": false, 00:08:35.659 "num_base_bdevs": 3, 00:08:35.659 "num_base_bdevs_discovered": 1, 00:08:35.659 "num_base_bdevs_operational": 3, 00:08:35.659 "base_bdevs_list": [ 00:08:35.659 { 00:08:35.659 "name": "BaseBdev1", 00:08:35.659 "uuid": "1899944d-f3bb-4439-89c5-cee9ad311c18", 00:08:35.659 "is_configured": true, 00:08:35.659 "data_offset": 0, 00:08:35.659 "data_size": 65536 00:08:35.659 }, 00:08:35.659 { 00:08:35.659 "name": "BaseBdev2", 00:08:35.659 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:35.659 
"is_configured": false, 00:08:35.659 "data_offset": 0, 00:08:35.659 "data_size": 0 00:08:35.659 }, 00:08:35.659 { 00:08:35.659 "name": "BaseBdev3", 00:08:35.659 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:35.659 "is_configured": false, 00:08:35.659 "data_offset": 0, 00:08:35.659 "data_size": 0 00:08:35.659 } 00:08:35.659 ] 00:08:35.659 }' 00:08:35.659 04:57:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:35.659 04:57:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.919 04:57:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:35.919 04:57:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.919 04:57:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.919 [2024-12-14 04:57:46.733860] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:35.919 BaseBdev2 00:08:35.919 04:57:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.919 04:57:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:35.919 04:57:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:08:35.919 04:57:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:35.919 04:57:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:35.919 04:57:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:35.919 04:57:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:35.919 04:57:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:35.919 04:57:46 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.919 04:57:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.919 04:57:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.919 04:57:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:35.919 04:57:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.919 04:57:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.919 [ 00:08:35.919 { 00:08:35.919 "name": "BaseBdev2", 00:08:35.919 "aliases": [ 00:08:35.919 "3e44f5a3-7d04-4d5a-90ac-d6887f0421cc" 00:08:35.919 ], 00:08:35.919 "product_name": "Malloc disk", 00:08:35.919 "block_size": 512, 00:08:35.919 "num_blocks": 65536, 00:08:35.919 "uuid": "3e44f5a3-7d04-4d5a-90ac-d6887f0421cc", 00:08:35.919 "assigned_rate_limits": { 00:08:35.919 "rw_ios_per_sec": 0, 00:08:35.919 "rw_mbytes_per_sec": 0, 00:08:35.919 "r_mbytes_per_sec": 0, 00:08:35.919 "w_mbytes_per_sec": 0 00:08:35.919 }, 00:08:35.919 "claimed": true, 00:08:35.919 "claim_type": "exclusive_write", 00:08:35.919 "zoned": false, 00:08:35.919 "supported_io_types": { 00:08:35.919 "read": true, 00:08:35.919 "write": true, 00:08:35.919 "unmap": true, 00:08:35.919 "flush": true, 00:08:35.919 "reset": true, 00:08:35.919 "nvme_admin": false, 00:08:35.919 "nvme_io": false, 00:08:35.919 "nvme_io_md": false, 00:08:35.919 "write_zeroes": true, 00:08:35.919 "zcopy": true, 00:08:35.919 "get_zone_info": false, 00:08:35.919 "zone_management": false, 00:08:35.919 "zone_append": false, 00:08:35.919 "compare": false, 00:08:35.919 "compare_and_write": false, 00:08:35.919 "abort": true, 00:08:35.919 "seek_hole": false, 00:08:35.919 "seek_data": false, 00:08:35.919 "copy": true, 00:08:35.919 "nvme_iov_md": false 00:08:35.919 }, 00:08:35.919 
"memory_domains": [ 00:08:35.919 { 00:08:35.919 "dma_device_id": "system", 00:08:35.919 "dma_device_type": 1 00:08:35.919 }, 00:08:35.919 { 00:08:35.919 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:35.919 "dma_device_type": 2 00:08:35.919 } 00:08:35.919 ], 00:08:35.919 "driver_specific": {} 00:08:35.919 } 00:08:35.919 ] 00:08:35.919 04:57:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.919 04:57:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:35.919 04:57:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:35.919 04:57:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:35.919 04:57:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:35.919 04:57:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:35.919 04:57:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:35.919 04:57:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:35.919 04:57:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:35.919 04:57:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:35.919 04:57:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:35.919 04:57:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:35.919 04:57:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:35.919 04:57:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:35.919 04:57:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:08:35.919 04:57:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:35.919 04:57:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.919 04:57:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.919 04:57:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.179 04:57:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:36.179 "name": "Existed_Raid", 00:08:36.179 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:36.179 "strip_size_kb": 0, 00:08:36.179 "state": "configuring", 00:08:36.179 "raid_level": "raid1", 00:08:36.179 "superblock": false, 00:08:36.179 "num_base_bdevs": 3, 00:08:36.179 "num_base_bdevs_discovered": 2, 00:08:36.179 "num_base_bdevs_operational": 3, 00:08:36.179 "base_bdevs_list": [ 00:08:36.179 { 00:08:36.179 "name": "BaseBdev1", 00:08:36.179 "uuid": "1899944d-f3bb-4439-89c5-cee9ad311c18", 00:08:36.179 "is_configured": true, 00:08:36.179 "data_offset": 0, 00:08:36.179 "data_size": 65536 00:08:36.179 }, 00:08:36.179 { 00:08:36.179 "name": "BaseBdev2", 00:08:36.179 "uuid": "3e44f5a3-7d04-4d5a-90ac-d6887f0421cc", 00:08:36.179 "is_configured": true, 00:08:36.179 "data_offset": 0, 00:08:36.179 "data_size": 65536 00:08:36.179 }, 00:08:36.179 { 00:08:36.179 "name": "BaseBdev3", 00:08:36.179 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:36.179 "is_configured": false, 00:08:36.179 "data_offset": 0, 00:08:36.179 "data_size": 0 00:08:36.179 } 00:08:36.179 ] 00:08:36.179 }' 00:08:36.179 04:57:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:36.179 04:57:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.437 04:57:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 
512 -b BaseBdev3 00:08:36.437 04:57:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.437 04:57:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.437 [2024-12-14 04:57:47.239937] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:36.437 [2024-12-14 04:57:47.239987] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:08:36.437 [2024-12-14 04:57:47.239998] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:08:36.437 [2024-12-14 04:57:47.240325] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:08:36.437 [2024-12-14 04:57:47.240488] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:08:36.437 [2024-12-14 04:57:47.240508] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:08:36.437 [2024-12-14 04:57:47.240699] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:36.437 BaseBdev3 00:08:36.437 04:57:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.437 04:57:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:08:36.437 04:57:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:08:36.437 04:57:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:36.437 04:57:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:36.437 04:57:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:36.437 04:57:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:36.437 04:57:47 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:36.437 04:57:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.437 04:57:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.437 04:57:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.437 04:57:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:36.437 04:57:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.437 04:57:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.437 [ 00:08:36.437 { 00:08:36.437 "name": "BaseBdev3", 00:08:36.438 "aliases": [ 00:08:36.438 "66c97568-010a-4f37-bdae-2090bef322ba" 00:08:36.438 ], 00:08:36.438 "product_name": "Malloc disk", 00:08:36.438 "block_size": 512, 00:08:36.438 "num_blocks": 65536, 00:08:36.438 "uuid": "66c97568-010a-4f37-bdae-2090bef322ba", 00:08:36.438 "assigned_rate_limits": { 00:08:36.438 "rw_ios_per_sec": 0, 00:08:36.438 "rw_mbytes_per_sec": 0, 00:08:36.438 "r_mbytes_per_sec": 0, 00:08:36.438 "w_mbytes_per_sec": 0 00:08:36.438 }, 00:08:36.438 "claimed": true, 00:08:36.438 "claim_type": "exclusive_write", 00:08:36.438 "zoned": false, 00:08:36.438 "supported_io_types": { 00:08:36.438 "read": true, 00:08:36.438 "write": true, 00:08:36.438 "unmap": true, 00:08:36.438 "flush": true, 00:08:36.438 "reset": true, 00:08:36.438 "nvme_admin": false, 00:08:36.438 "nvme_io": false, 00:08:36.438 "nvme_io_md": false, 00:08:36.438 "write_zeroes": true, 00:08:36.438 "zcopy": true, 00:08:36.438 "get_zone_info": false, 00:08:36.438 "zone_management": false, 00:08:36.438 "zone_append": false, 00:08:36.438 "compare": false, 00:08:36.438 "compare_and_write": false, 00:08:36.438 "abort": true, 00:08:36.438 "seek_hole": false, 00:08:36.438 "seek_data": false, 00:08:36.438 
"copy": true, 00:08:36.438 "nvme_iov_md": false 00:08:36.438 }, 00:08:36.438 "memory_domains": [ 00:08:36.438 { 00:08:36.438 "dma_device_id": "system", 00:08:36.438 "dma_device_type": 1 00:08:36.438 }, 00:08:36.438 { 00:08:36.438 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:36.438 "dma_device_type": 2 00:08:36.438 } 00:08:36.438 ], 00:08:36.438 "driver_specific": {} 00:08:36.438 } 00:08:36.438 ] 00:08:36.438 04:57:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.438 04:57:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:36.438 04:57:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:36.438 04:57:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:36.438 04:57:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:08:36.438 04:57:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:36.438 04:57:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:36.438 04:57:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:36.438 04:57:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:36.438 04:57:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:36.438 04:57:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:36.438 04:57:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:36.438 04:57:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:36.438 04:57:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:36.438 04:57:47 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:36.438 04:57:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:36.438 04:57:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.438 04:57:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.438 04:57:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.438 04:57:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:36.438 "name": "Existed_Raid", 00:08:36.438 "uuid": "dec88f91-56e7-49fb-a867-94bba0baf2c2", 00:08:36.438 "strip_size_kb": 0, 00:08:36.438 "state": "online", 00:08:36.438 "raid_level": "raid1", 00:08:36.438 "superblock": false, 00:08:36.438 "num_base_bdevs": 3, 00:08:36.438 "num_base_bdevs_discovered": 3, 00:08:36.438 "num_base_bdevs_operational": 3, 00:08:36.438 "base_bdevs_list": [ 00:08:36.438 { 00:08:36.438 "name": "BaseBdev1", 00:08:36.438 "uuid": "1899944d-f3bb-4439-89c5-cee9ad311c18", 00:08:36.438 "is_configured": true, 00:08:36.438 "data_offset": 0, 00:08:36.438 "data_size": 65536 00:08:36.438 }, 00:08:36.438 { 00:08:36.438 "name": "BaseBdev2", 00:08:36.438 "uuid": "3e44f5a3-7d04-4d5a-90ac-d6887f0421cc", 00:08:36.438 "is_configured": true, 00:08:36.438 "data_offset": 0, 00:08:36.438 "data_size": 65536 00:08:36.438 }, 00:08:36.438 { 00:08:36.438 "name": "BaseBdev3", 00:08:36.438 "uuid": "66c97568-010a-4f37-bdae-2090bef322ba", 00:08:36.438 "is_configured": true, 00:08:36.438 "data_offset": 0, 00:08:36.438 "data_size": 65536 00:08:36.438 } 00:08:36.438 ] 00:08:36.438 }' 00:08:36.438 04:57:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:36.438 04:57:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.006 04:57:47 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:37.006 04:57:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:37.006 04:57:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:37.006 04:57:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:37.006 04:57:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:37.006 04:57:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:37.006 04:57:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:37.006 04:57:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.006 04:57:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.006 04:57:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:37.006 [2024-12-14 04:57:47.703508] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:37.006 04:57:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.006 04:57:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:37.006 "name": "Existed_Raid", 00:08:37.006 "aliases": [ 00:08:37.006 "dec88f91-56e7-49fb-a867-94bba0baf2c2" 00:08:37.006 ], 00:08:37.006 "product_name": "Raid Volume", 00:08:37.006 "block_size": 512, 00:08:37.006 "num_blocks": 65536, 00:08:37.006 "uuid": "dec88f91-56e7-49fb-a867-94bba0baf2c2", 00:08:37.006 "assigned_rate_limits": { 00:08:37.006 "rw_ios_per_sec": 0, 00:08:37.006 "rw_mbytes_per_sec": 0, 00:08:37.006 "r_mbytes_per_sec": 0, 00:08:37.006 "w_mbytes_per_sec": 0 00:08:37.006 }, 00:08:37.006 "claimed": false, 00:08:37.006 "zoned": false, 
00:08:37.006 "supported_io_types": { 00:08:37.006 "read": true, 00:08:37.006 "write": true, 00:08:37.006 "unmap": false, 00:08:37.006 "flush": false, 00:08:37.006 "reset": true, 00:08:37.006 "nvme_admin": false, 00:08:37.006 "nvme_io": false, 00:08:37.006 "nvme_io_md": false, 00:08:37.006 "write_zeroes": true, 00:08:37.006 "zcopy": false, 00:08:37.006 "get_zone_info": false, 00:08:37.006 "zone_management": false, 00:08:37.006 "zone_append": false, 00:08:37.006 "compare": false, 00:08:37.006 "compare_and_write": false, 00:08:37.006 "abort": false, 00:08:37.006 "seek_hole": false, 00:08:37.006 "seek_data": false, 00:08:37.006 "copy": false, 00:08:37.006 "nvme_iov_md": false 00:08:37.006 }, 00:08:37.006 "memory_domains": [ 00:08:37.006 { 00:08:37.006 "dma_device_id": "system", 00:08:37.006 "dma_device_type": 1 00:08:37.006 }, 00:08:37.006 { 00:08:37.006 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:37.006 "dma_device_type": 2 00:08:37.006 }, 00:08:37.006 { 00:08:37.006 "dma_device_id": "system", 00:08:37.006 "dma_device_type": 1 00:08:37.006 }, 00:08:37.006 { 00:08:37.006 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:37.006 "dma_device_type": 2 00:08:37.006 }, 00:08:37.006 { 00:08:37.006 "dma_device_id": "system", 00:08:37.006 "dma_device_type": 1 00:08:37.006 }, 00:08:37.006 { 00:08:37.006 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:37.006 "dma_device_type": 2 00:08:37.006 } 00:08:37.006 ], 00:08:37.006 "driver_specific": { 00:08:37.006 "raid": { 00:08:37.006 "uuid": "dec88f91-56e7-49fb-a867-94bba0baf2c2", 00:08:37.007 "strip_size_kb": 0, 00:08:37.007 "state": "online", 00:08:37.007 "raid_level": "raid1", 00:08:37.007 "superblock": false, 00:08:37.007 "num_base_bdevs": 3, 00:08:37.007 "num_base_bdevs_discovered": 3, 00:08:37.007 "num_base_bdevs_operational": 3, 00:08:37.007 "base_bdevs_list": [ 00:08:37.007 { 00:08:37.007 "name": "BaseBdev1", 00:08:37.007 "uuid": "1899944d-f3bb-4439-89c5-cee9ad311c18", 00:08:37.007 "is_configured": true, 00:08:37.007 
"data_offset": 0, 00:08:37.007 "data_size": 65536 00:08:37.007 }, 00:08:37.007 { 00:08:37.007 "name": "BaseBdev2", 00:08:37.007 "uuid": "3e44f5a3-7d04-4d5a-90ac-d6887f0421cc", 00:08:37.007 "is_configured": true, 00:08:37.007 "data_offset": 0, 00:08:37.007 "data_size": 65536 00:08:37.007 }, 00:08:37.007 { 00:08:37.007 "name": "BaseBdev3", 00:08:37.007 "uuid": "66c97568-010a-4f37-bdae-2090bef322ba", 00:08:37.007 "is_configured": true, 00:08:37.007 "data_offset": 0, 00:08:37.007 "data_size": 65536 00:08:37.007 } 00:08:37.007 ] 00:08:37.007 } 00:08:37.007 } 00:08:37.007 }' 00:08:37.007 04:57:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:37.007 04:57:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:37.007 BaseBdev2 00:08:37.007 BaseBdev3' 00:08:37.007 04:57:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:37.007 04:57:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:37.007 04:57:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:37.007 04:57:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:37.007 04:57:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.007 04:57:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.007 04:57:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:37.007 04:57:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.007 04:57:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:08:37.007 04:57:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:37.007 04:57:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:37.007 04:57:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:37.007 04:57:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.007 04:57:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.007 04:57:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:37.007 04:57:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.266 04:57:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:37.266 04:57:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:37.267 04:57:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:37.267 04:57:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:37.267 04:57:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:37.267 04:57:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.267 04:57:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.267 04:57:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.267 04:57:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:37.267 04:57:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 
== \5\1\2\ \ \ ]] 00:08:37.267 04:57:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:37.267 04:57:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.267 04:57:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.267 [2024-12-14 04:57:47.954801] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:37.267 04:57:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.267 04:57:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:37.267 04:57:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:08:37.267 04:57:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:37.267 04:57:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:37.267 04:57:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:08:37.267 04:57:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:08:37.267 04:57:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:37.267 04:57:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:37.267 04:57:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:37.267 04:57:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:37.267 04:57:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:37.267 04:57:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:37.267 04:57:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:08:37.267 04:57:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:37.267 04:57:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:37.267 04:57:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:37.267 04:57:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.267 04:57:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.267 04:57:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:37.267 04:57:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.267 04:57:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:37.267 "name": "Existed_Raid", 00:08:37.267 "uuid": "dec88f91-56e7-49fb-a867-94bba0baf2c2", 00:08:37.267 "strip_size_kb": 0, 00:08:37.267 "state": "online", 00:08:37.267 "raid_level": "raid1", 00:08:37.267 "superblock": false, 00:08:37.267 "num_base_bdevs": 3, 00:08:37.267 "num_base_bdevs_discovered": 2, 00:08:37.267 "num_base_bdevs_operational": 2, 00:08:37.267 "base_bdevs_list": [ 00:08:37.267 { 00:08:37.267 "name": null, 00:08:37.267 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:37.267 "is_configured": false, 00:08:37.267 "data_offset": 0, 00:08:37.267 "data_size": 65536 00:08:37.267 }, 00:08:37.267 { 00:08:37.267 "name": "BaseBdev2", 00:08:37.267 "uuid": "3e44f5a3-7d04-4d5a-90ac-d6887f0421cc", 00:08:37.267 "is_configured": true, 00:08:37.267 "data_offset": 0, 00:08:37.267 "data_size": 65536 00:08:37.267 }, 00:08:37.267 { 00:08:37.267 "name": "BaseBdev3", 00:08:37.267 "uuid": "66c97568-010a-4f37-bdae-2090bef322ba", 00:08:37.267 "is_configured": true, 00:08:37.267 "data_offset": 0, 00:08:37.267 "data_size": 65536 00:08:37.267 } 00:08:37.267 ] 
00:08:37.267 }' 00:08:37.267 04:57:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:37.267 04:57:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.835 04:57:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:37.835 04:57:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:37.835 04:57:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:37.835 04:57:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:37.835 04:57:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.835 04:57:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.835 04:57:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.835 04:57:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:37.835 04:57:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:37.835 04:57:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:37.835 04:57:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.835 04:57:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.835 [2024-12-14 04:57:48.449266] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:37.835 04:57:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.835 04:57:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:37.835 04:57:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:37.835 04:57:48 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:37.835 04:57:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.835 04:57:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.835 04:57:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:37.835 04:57:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.835 04:57:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:37.835 04:57:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:37.835 04:57:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:08:37.835 04:57:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.835 04:57:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.835 [2024-12-14 04:57:48.520376] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:37.835 [2024-12-14 04:57:48.520473] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:37.835 [2024-12-14 04:57:48.532076] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:37.835 [2024-12-14 04:57:48.532122] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:37.835 [2024-12-14 04:57:48.532137] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:08:37.835 04:57:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.835 04:57:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:37.835 04:57:48 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:37.835 04:57:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:37.835 04:57:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:37.835 04:57:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.835 04:57:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.835 04:57:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.835 04:57:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:37.835 04:57:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:37.835 04:57:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:08:37.835 04:57:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:08:37.835 04:57:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:37.835 04:57:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:37.835 04:57:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.835 04:57:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.835 BaseBdev2 00:08:37.835 04:57:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.835 04:57:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:08:37.835 04:57:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:08:37.835 04:57:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:37.835 
04:57:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:37.835 04:57:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:37.835 04:57:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:37.835 04:57:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:37.835 04:57:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.835 04:57:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.835 04:57:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.835 04:57:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:37.835 04:57:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.835 04:57:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.835 [ 00:08:37.835 { 00:08:37.835 "name": "BaseBdev2", 00:08:37.835 "aliases": [ 00:08:37.835 "80ad240b-2dfe-4b8f-a02a-3c667ab00b9c" 00:08:37.835 ], 00:08:37.835 "product_name": "Malloc disk", 00:08:37.835 "block_size": 512, 00:08:37.835 "num_blocks": 65536, 00:08:37.835 "uuid": "80ad240b-2dfe-4b8f-a02a-3c667ab00b9c", 00:08:37.835 "assigned_rate_limits": { 00:08:37.835 "rw_ios_per_sec": 0, 00:08:37.835 "rw_mbytes_per_sec": 0, 00:08:37.835 "r_mbytes_per_sec": 0, 00:08:37.835 "w_mbytes_per_sec": 0 00:08:37.835 }, 00:08:37.835 "claimed": false, 00:08:37.835 "zoned": false, 00:08:37.835 "supported_io_types": { 00:08:37.835 "read": true, 00:08:37.835 "write": true, 00:08:37.835 "unmap": true, 00:08:37.835 "flush": true, 00:08:37.835 "reset": true, 00:08:37.835 "nvme_admin": false, 00:08:37.835 "nvme_io": false, 00:08:37.835 "nvme_io_md": false, 00:08:37.835 "write_zeroes": true, 
00:08:37.835 "zcopy": true, 00:08:37.835 "get_zone_info": false, 00:08:37.835 "zone_management": false, 00:08:37.835 "zone_append": false, 00:08:37.835 "compare": false, 00:08:37.835 "compare_and_write": false, 00:08:37.835 "abort": true, 00:08:37.835 "seek_hole": false, 00:08:37.835 "seek_data": false, 00:08:37.835 "copy": true, 00:08:37.835 "nvme_iov_md": false 00:08:37.835 }, 00:08:37.835 "memory_domains": [ 00:08:37.835 { 00:08:37.835 "dma_device_id": "system", 00:08:37.835 "dma_device_type": 1 00:08:37.835 }, 00:08:37.835 { 00:08:37.835 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:37.835 "dma_device_type": 2 00:08:37.836 } 00:08:37.836 ], 00:08:37.836 "driver_specific": {} 00:08:37.836 } 00:08:37.836 ] 00:08:37.836 04:57:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.836 04:57:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:37.836 04:57:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:37.836 04:57:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:37.836 04:57:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:37.836 04:57:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.836 04:57:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.836 BaseBdev3 00:08:37.836 04:57:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.836 04:57:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:08:37.836 04:57:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:08:37.836 04:57:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:37.836 04:57:48 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:37.836 04:57:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:37.836 04:57:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:37.836 04:57:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:37.836 04:57:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.836 04:57:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.836 04:57:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.836 04:57:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:37.836 04:57:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.836 04:57:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.836 [ 00:08:37.836 { 00:08:37.836 "name": "BaseBdev3", 00:08:37.836 "aliases": [ 00:08:37.836 "3c5a2d56-91b6-4ea6-a1bd-566e588f5bd5" 00:08:37.836 ], 00:08:37.836 "product_name": "Malloc disk", 00:08:37.836 "block_size": 512, 00:08:37.836 "num_blocks": 65536, 00:08:37.836 "uuid": "3c5a2d56-91b6-4ea6-a1bd-566e588f5bd5", 00:08:37.836 "assigned_rate_limits": { 00:08:37.836 "rw_ios_per_sec": 0, 00:08:37.836 "rw_mbytes_per_sec": 0, 00:08:37.836 "r_mbytes_per_sec": 0, 00:08:37.836 "w_mbytes_per_sec": 0 00:08:37.836 }, 00:08:37.836 "claimed": false, 00:08:37.836 "zoned": false, 00:08:37.836 "supported_io_types": { 00:08:37.836 "read": true, 00:08:37.836 "write": true, 00:08:37.836 "unmap": true, 00:08:37.836 "flush": true, 00:08:37.836 "reset": true, 00:08:37.836 "nvme_admin": false, 00:08:37.836 "nvme_io": false, 00:08:37.836 "nvme_io_md": false, 00:08:37.836 "write_zeroes": true, 
00:08:37.836 "zcopy": true, 00:08:37.836 "get_zone_info": false, 00:08:37.836 "zone_management": false, 00:08:37.836 "zone_append": false, 00:08:37.836 "compare": false, 00:08:37.836 "compare_and_write": false, 00:08:37.836 "abort": true, 00:08:37.836 "seek_hole": false, 00:08:37.836 "seek_data": false, 00:08:37.836 "copy": true, 00:08:37.836 "nvme_iov_md": false 00:08:37.836 }, 00:08:37.836 "memory_domains": [ 00:08:37.836 { 00:08:37.836 "dma_device_id": "system", 00:08:37.836 "dma_device_type": 1 00:08:37.836 }, 00:08:37.836 { 00:08:37.836 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:37.836 "dma_device_type": 2 00:08:37.836 } 00:08:37.836 ], 00:08:37.836 "driver_specific": {} 00:08:37.836 } 00:08:37.836 ] 00:08:37.836 04:57:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.836 04:57:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:37.836 04:57:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:37.836 04:57:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:37.836 04:57:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:37.836 04:57:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.836 04:57:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.836 [2024-12-14 04:57:48.667328] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:37.836 [2024-12-14 04:57:48.667373] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:37.836 [2024-12-14 04:57:48.667391] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:37.836 [2024-12-14 04:57:48.669209] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:37.836 04:57:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.836 04:57:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:37.836 04:57:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:37.836 04:57:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:37.836 04:57:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:37.836 04:57:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:37.836 04:57:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:37.836 04:57:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:37.836 04:57:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:37.836 04:57:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:37.836 04:57:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:37.836 04:57:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:37.836 04:57:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.836 04:57:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.836 04:57:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:37.836 04:57:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.095 04:57:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:08:38.095 "name": "Existed_Raid", 00:08:38.095 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:38.095 "strip_size_kb": 0, 00:08:38.095 "state": "configuring", 00:08:38.095 "raid_level": "raid1", 00:08:38.095 "superblock": false, 00:08:38.095 "num_base_bdevs": 3, 00:08:38.095 "num_base_bdevs_discovered": 2, 00:08:38.095 "num_base_bdevs_operational": 3, 00:08:38.095 "base_bdevs_list": [ 00:08:38.095 { 00:08:38.095 "name": "BaseBdev1", 00:08:38.095 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:38.095 "is_configured": false, 00:08:38.095 "data_offset": 0, 00:08:38.095 "data_size": 0 00:08:38.095 }, 00:08:38.095 { 00:08:38.095 "name": "BaseBdev2", 00:08:38.095 "uuid": "80ad240b-2dfe-4b8f-a02a-3c667ab00b9c", 00:08:38.095 "is_configured": true, 00:08:38.095 "data_offset": 0, 00:08:38.095 "data_size": 65536 00:08:38.095 }, 00:08:38.095 { 00:08:38.095 "name": "BaseBdev3", 00:08:38.095 "uuid": "3c5a2d56-91b6-4ea6-a1bd-566e588f5bd5", 00:08:38.095 "is_configured": true, 00:08:38.095 "data_offset": 0, 00:08:38.095 "data_size": 65536 00:08:38.095 } 00:08:38.095 ] 00:08:38.095 }' 00:08:38.095 04:57:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:38.095 04:57:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.354 04:57:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:08:38.354 04:57:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.354 04:57:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.354 [2024-12-14 04:57:49.018723] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:38.354 04:57:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.354 04:57:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state 
Existed_Raid configuring raid1 0 3 00:08:38.354 04:57:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:38.354 04:57:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:38.354 04:57:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:38.354 04:57:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:38.354 04:57:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:38.354 04:57:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:38.354 04:57:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:38.354 04:57:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:38.354 04:57:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:38.354 04:57:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:38.354 04:57:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.354 04:57:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.354 04:57:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:38.354 04:57:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.354 04:57:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:38.354 "name": "Existed_Raid", 00:08:38.354 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:38.354 "strip_size_kb": 0, 00:08:38.355 "state": "configuring", 00:08:38.355 "raid_level": "raid1", 00:08:38.355 "superblock": false, 00:08:38.355 "num_base_bdevs": 3, 
00:08:38.355 "num_base_bdevs_discovered": 1, 00:08:38.355 "num_base_bdevs_operational": 3, 00:08:38.355 "base_bdevs_list": [ 00:08:38.355 { 00:08:38.355 "name": "BaseBdev1", 00:08:38.355 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:38.355 "is_configured": false, 00:08:38.355 "data_offset": 0, 00:08:38.355 "data_size": 0 00:08:38.355 }, 00:08:38.355 { 00:08:38.355 "name": null, 00:08:38.355 "uuid": "80ad240b-2dfe-4b8f-a02a-3c667ab00b9c", 00:08:38.355 "is_configured": false, 00:08:38.355 "data_offset": 0, 00:08:38.355 "data_size": 65536 00:08:38.355 }, 00:08:38.355 { 00:08:38.355 "name": "BaseBdev3", 00:08:38.355 "uuid": "3c5a2d56-91b6-4ea6-a1bd-566e588f5bd5", 00:08:38.355 "is_configured": true, 00:08:38.355 "data_offset": 0, 00:08:38.355 "data_size": 65536 00:08:38.355 } 00:08:38.355 ] 00:08:38.355 }' 00:08:38.355 04:57:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:38.355 04:57:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.614 04:57:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:38.614 04:57:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:38.614 04:57:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.614 04:57:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.614 04:57:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.873 04:57:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:08:38.873 04:57:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:38.873 04:57:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.873 04:57:49 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.873 [2024-12-14 04:57:49.516759] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:38.873 BaseBdev1 00:08:38.873 04:57:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.873 04:57:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:08:38.873 04:57:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:08:38.873 04:57:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:38.873 04:57:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:38.873 04:57:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:38.873 04:57:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:38.873 04:57:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:38.873 04:57:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.873 04:57:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.873 04:57:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.873 04:57:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:38.873 04:57:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.873 04:57:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.873 [ 00:08:38.873 { 00:08:38.873 "name": "BaseBdev1", 00:08:38.873 "aliases": [ 00:08:38.873 "d7c21964-721a-4c02-9997-1eca57d2745f" 00:08:38.873 ], 00:08:38.873 "product_name": "Malloc disk", 
00:08:38.873 "block_size": 512, 00:08:38.873 "num_blocks": 65536, 00:08:38.873 "uuid": "d7c21964-721a-4c02-9997-1eca57d2745f", 00:08:38.873 "assigned_rate_limits": { 00:08:38.873 "rw_ios_per_sec": 0, 00:08:38.873 "rw_mbytes_per_sec": 0, 00:08:38.873 "r_mbytes_per_sec": 0, 00:08:38.873 "w_mbytes_per_sec": 0 00:08:38.873 }, 00:08:38.873 "claimed": true, 00:08:38.873 "claim_type": "exclusive_write", 00:08:38.873 "zoned": false, 00:08:38.873 "supported_io_types": { 00:08:38.873 "read": true, 00:08:38.873 "write": true, 00:08:38.873 "unmap": true, 00:08:38.873 "flush": true, 00:08:38.873 "reset": true, 00:08:38.873 "nvme_admin": false, 00:08:38.873 "nvme_io": false, 00:08:38.873 "nvme_io_md": false, 00:08:38.873 "write_zeroes": true, 00:08:38.873 "zcopy": true, 00:08:38.873 "get_zone_info": false, 00:08:38.873 "zone_management": false, 00:08:38.873 "zone_append": false, 00:08:38.873 "compare": false, 00:08:38.873 "compare_and_write": false, 00:08:38.873 "abort": true, 00:08:38.873 "seek_hole": false, 00:08:38.873 "seek_data": false, 00:08:38.873 "copy": true, 00:08:38.873 "nvme_iov_md": false 00:08:38.873 }, 00:08:38.873 "memory_domains": [ 00:08:38.873 { 00:08:38.873 "dma_device_id": "system", 00:08:38.873 "dma_device_type": 1 00:08:38.873 }, 00:08:38.873 { 00:08:38.873 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:38.873 "dma_device_type": 2 00:08:38.873 } 00:08:38.873 ], 00:08:38.873 "driver_specific": {} 00:08:38.873 } 00:08:38.873 ] 00:08:38.873 04:57:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.874 04:57:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:38.874 04:57:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:38.874 04:57:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:38.874 04:57:49 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:38.874 04:57:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:38.874 04:57:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:38.874 04:57:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:38.874 04:57:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:38.874 04:57:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:38.874 04:57:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:38.874 04:57:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:38.874 04:57:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:38.874 04:57:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:38.874 04:57:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.874 04:57:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.874 04:57:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.874 04:57:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:38.874 "name": "Existed_Raid", 00:08:38.874 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:38.874 "strip_size_kb": 0, 00:08:38.874 "state": "configuring", 00:08:38.874 "raid_level": "raid1", 00:08:38.874 "superblock": false, 00:08:38.874 "num_base_bdevs": 3, 00:08:38.874 "num_base_bdevs_discovered": 2, 00:08:38.874 "num_base_bdevs_operational": 3, 00:08:38.874 "base_bdevs_list": [ 00:08:38.874 { 00:08:38.874 "name": "BaseBdev1", 00:08:38.874 "uuid": 
"d7c21964-721a-4c02-9997-1eca57d2745f", 00:08:38.874 "is_configured": true, 00:08:38.874 "data_offset": 0, 00:08:38.874 "data_size": 65536 00:08:38.874 }, 00:08:38.874 { 00:08:38.874 "name": null, 00:08:38.874 "uuid": "80ad240b-2dfe-4b8f-a02a-3c667ab00b9c", 00:08:38.874 "is_configured": false, 00:08:38.874 "data_offset": 0, 00:08:38.874 "data_size": 65536 00:08:38.874 }, 00:08:38.874 { 00:08:38.874 "name": "BaseBdev3", 00:08:38.874 "uuid": "3c5a2d56-91b6-4ea6-a1bd-566e588f5bd5", 00:08:38.874 "is_configured": true, 00:08:38.874 "data_offset": 0, 00:08:38.874 "data_size": 65536 00:08:38.874 } 00:08:38.874 ] 00:08:38.874 }' 00:08:38.874 04:57:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:38.874 04:57:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.133 04:57:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:39.133 04:57:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:39.133 04:57:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.133 04:57:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.133 04:57:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.133 04:57:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:08:39.133 04:57:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:08:39.133 04:57:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.133 04:57:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.133 [2024-12-14 04:57:49.991994] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:39.133 04:57:49 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.133 04:57:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:39.133 04:57:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:39.133 04:57:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:39.133 04:57:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:39.133 04:57:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:39.133 04:57:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:39.133 04:57:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:39.133 04:57:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:39.133 04:57:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:39.133 04:57:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:39.133 04:57:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:39.133 04:57:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:39.133 04:57:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.133 04:57:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.393 04:57:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.393 04:57:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:39.393 "name": "Existed_Raid", 00:08:39.393 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:08:39.393 "strip_size_kb": 0, 00:08:39.393 "state": "configuring", 00:08:39.393 "raid_level": "raid1", 00:08:39.393 "superblock": false, 00:08:39.393 "num_base_bdevs": 3, 00:08:39.393 "num_base_bdevs_discovered": 1, 00:08:39.393 "num_base_bdevs_operational": 3, 00:08:39.393 "base_bdevs_list": [ 00:08:39.393 { 00:08:39.393 "name": "BaseBdev1", 00:08:39.393 "uuid": "d7c21964-721a-4c02-9997-1eca57d2745f", 00:08:39.393 "is_configured": true, 00:08:39.393 "data_offset": 0, 00:08:39.393 "data_size": 65536 00:08:39.393 }, 00:08:39.393 { 00:08:39.393 "name": null, 00:08:39.393 "uuid": "80ad240b-2dfe-4b8f-a02a-3c667ab00b9c", 00:08:39.393 "is_configured": false, 00:08:39.393 "data_offset": 0, 00:08:39.393 "data_size": 65536 00:08:39.393 }, 00:08:39.393 { 00:08:39.393 "name": null, 00:08:39.393 "uuid": "3c5a2d56-91b6-4ea6-a1bd-566e588f5bd5", 00:08:39.393 "is_configured": false, 00:08:39.393 "data_offset": 0, 00:08:39.393 "data_size": 65536 00:08:39.393 } 00:08:39.393 ] 00:08:39.393 }' 00:08:39.393 04:57:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:39.393 04:57:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.652 04:57:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:39.652 04:57:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:39.652 04:57:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.652 04:57:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.652 04:57:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.652 04:57:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:08:39.652 04:57:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:08:39.652 04:57:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.652 04:57:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.652 [2024-12-14 04:57:50.475293] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:39.652 04:57:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.652 04:57:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:39.652 04:57:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:39.652 04:57:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:39.652 04:57:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:39.652 04:57:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:39.652 04:57:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:39.652 04:57:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:39.652 04:57:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:39.652 04:57:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:39.652 04:57:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:39.652 04:57:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:39.652 04:57:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:39.652 04:57:50 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.652 04:57:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.652 04:57:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.652 04:57:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:39.652 "name": "Existed_Raid", 00:08:39.652 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:39.652 "strip_size_kb": 0, 00:08:39.652 "state": "configuring", 00:08:39.652 "raid_level": "raid1", 00:08:39.652 "superblock": false, 00:08:39.652 "num_base_bdevs": 3, 00:08:39.652 "num_base_bdevs_discovered": 2, 00:08:39.652 "num_base_bdevs_operational": 3, 00:08:39.652 "base_bdevs_list": [ 00:08:39.652 { 00:08:39.652 "name": "BaseBdev1", 00:08:39.652 "uuid": "d7c21964-721a-4c02-9997-1eca57d2745f", 00:08:39.652 "is_configured": true, 00:08:39.652 "data_offset": 0, 00:08:39.652 "data_size": 65536 00:08:39.652 }, 00:08:39.652 { 00:08:39.652 "name": null, 00:08:39.652 "uuid": "80ad240b-2dfe-4b8f-a02a-3c667ab00b9c", 00:08:39.652 "is_configured": false, 00:08:39.652 "data_offset": 0, 00:08:39.652 "data_size": 65536 00:08:39.652 }, 00:08:39.652 { 00:08:39.652 "name": "BaseBdev3", 00:08:39.652 "uuid": "3c5a2d56-91b6-4ea6-a1bd-566e588f5bd5", 00:08:39.652 "is_configured": true, 00:08:39.652 "data_offset": 0, 00:08:39.652 "data_size": 65536 00:08:39.652 } 00:08:39.652 ] 00:08:39.652 }' 00:08:39.652 04:57:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:39.652 04:57:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.220 04:57:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:40.220 04:57:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:40.220 04:57:50 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.220 04:57:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.220 04:57:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.220 04:57:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:08:40.220 04:57:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:40.220 04:57:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.220 04:57:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.220 [2024-12-14 04:57:50.918519] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:40.220 04:57:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.220 04:57:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:40.220 04:57:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:40.220 04:57:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:40.220 04:57:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:40.220 04:57:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:40.220 04:57:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:40.220 04:57:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:40.220 04:57:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:40.220 04:57:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:40.220 04:57:50 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:40.220 04:57:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:40.220 04:57:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:40.220 04:57:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.220 04:57:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.220 04:57:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.220 04:57:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:40.220 "name": "Existed_Raid", 00:08:40.220 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:40.220 "strip_size_kb": 0, 00:08:40.220 "state": "configuring", 00:08:40.220 "raid_level": "raid1", 00:08:40.220 "superblock": false, 00:08:40.220 "num_base_bdevs": 3, 00:08:40.220 "num_base_bdevs_discovered": 1, 00:08:40.220 "num_base_bdevs_operational": 3, 00:08:40.220 "base_bdevs_list": [ 00:08:40.220 { 00:08:40.220 "name": null, 00:08:40.220 "uuid": "d7c21964-721a-4c02-9997-1eca57d2745f", 00:08:40.220 "is_configured": false, 00:08:40.220 "data_offset": 0, 00:08:40.220 "data_size": 65536 00:08:40.220 }, 00:08:40.220 { 00:08:40.220 "name": null, 00:08:40.220 "uuid": "80ad240b-2dfe-4b8f-a02a-3c667ab00b9c", 00:08:40.220 "is_configured": false, 00:08:40.220 "data_offset": 0, 00:08:40.220 "data_size": 65536 00:08:40.220 }, 00:08:40.220 { 00:08:40.220 "name": "BaseBdev3", 00:08:40.220 "uuid": "3c5a2d56-91b6-4ea6-a1bd-566e588f5bd5", 00:08:40.220 "is_configured": true, 00:08:40.220 "data_offset": 0, 00:08:40.220 "data_size": 65536 00:08:40.220 } 00:08:40.220 ] 00:08:40.220 }' 00:08:40.220 04:57:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:40.220 04:57:50 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:08:40.479 04:57:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:40.479 04:57:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.479 04:57:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:40.479 04:57:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.479 04:57:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.738 04:57:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:08:40.738 04:57:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:08:40.738 04:57:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.738 04:57:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.738 [2024-12-14 04:57:51.380243] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:40.738 04:57:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.738 04:57:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:40.738 04:57:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:40.738 04:57:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:40.738 04:57:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:40.738 04:57:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:40.738 04:57:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=3 00:08:40.738 04:57:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:40.738 04:57:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:40.738 04:57:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:40.738 04:57:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:40.738 04:57:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:40.738 04:57:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.738 04:57:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.738 04:57:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:40.738 04:57:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.738 04:57:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:40.738 "name": "Existed_Raid", 00:08:40.738 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:40.738 "strip_size_kb": 0, 00:08:40.738 "state": "configuring", 00:08:40.738 "raid_level": "raid1", 00:08:40.738 "superblock": false, 00:08:40.738 "num_base_bdevs": 3, 00:08:40.738 "num_base_bdevs_discovered": 2, 00:08:40.738 "num_base_bdevs_operational": 3, 00:08:40.738 "base_bdevs_list": [ 00:08:40.738 { 00:08:40.738 "name": null, 00:08:40.738 "uuid": "d7c21964-721a-4c02-9997-1eca57d2745f", 00:08:40.738 "is_configured": false, 00:08:40.738 "data_offset": 0, 00:08:40.738 "data_size": 65536 00:08:40.738 }, 00:08:40.738 { 00:08:40.738 "name": "BaseBdev2", 00:08:40.738 "uuid": "80ad240b-2dfe-4b8f-a02a-3c667ab00b9c", 00:08:40.738 "is_configured": true, 00:08:40.738 "data_offset": 0, 00:08:40.738 "data_size": 65536 00:08:40.738 }, 00:08:40.738 { 
00:08:40.738 "name": "BaseBdev3", 00:08:40.738 "uuid": "3c5a2d56-91b6-4ea6-a1bd-566e588f5bd5", 00:08:40.738 "is_configured": true, 00:08:40.738 "data_offset": 0, 00:08:40.738 "data_size": 65536 00:08:40.738 } 00:08:40.738 ] 00:08:40.738 }' 00:08:40.738 04:57:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:40.738 04:57:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.996 04:57:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:40.996 04:57:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:40.996 04:57:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.996 04:57:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.996 04:57:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.996 04:57:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:08:40.996 04:57:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:40.996 04:57:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.996 04:57:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.996 04:57:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:08:40.996 04:57:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.255 04:57:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u d7c21964-721a-4c02-9997-1eca57d2745f 00:08:41.255 04:57:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.255 04:57:51 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.255 [2024-12-14 04:57:51.898250] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:08:41.255 [2024-12-14 04:57:51.898293] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:08:41.255 [2024-12-14 04:57:51.898301] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:08:41.255 [2024-12-14 04:57:51.898546] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:08:41.255 [2024-12-14 04:57:51.898681] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:08:41.255 [2024-12-14 04:57:51.898694] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:08:41.255 [2024-12-14 04:57:51.898871] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:41.255 NewBaseBdev 00:08:41.255 04:57:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.255 04:57:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:08:41.255 04:57:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:08:41.255 04:57:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:41.255 04:57:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:41.255 04:57:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:41.255 04:57:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:41.255 04:57:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:41.255 04:57:51 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.255 04:57:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.255 04:57:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.255 04:57:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:08:41.255 04:57:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.255 04:57:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.255 [ 00:08:41.255 { 00:08:41.255 "name": "NewBaseBdev", 00:08:41.255 "aliases": [ 00:08:41.255 "d7c21964-721a-4c02-9997-1eca57d2745f" 00:08:41.255 ], 00:08:41.255 "product_name": "Malloc disk", 00:08:41.255 "block_size": 512, 00:08:41.255 "num_blocks": 65536, 00:08:41.255 "uuid": "d7c21964-721a-4c02-9997-1eca57d2745f", 00:08:41.255 "assigned_rate_limits": { 00:08:41.255 "rw_ios_per_sec": 0, 00:08:41.255 "rw_mbytes_per_sec": 0, 00:08:41.255 "r_mbytes_per_sec": 0, 00:08:41.255 "w_mbytes_per_sec": 0 00:08:41.255 }, 00:08:41.255 "claimed": true, 00:08:41.255 "claim_type": "exclusive_write", 00:08:41.255 "zoned": false, 00:08:41.255 "supported_io_types": { 00:08:41.255 "read": true, 00:08:41.255 "write": true, 00:08:41.255 "unmap": true, 00:08:41.255 "flush": true, 00:08:41.255 "reset": true, 00:08:41.255 "nvme_admin": false, 00:08:41.255 "nvme_io": false, 00:08:41.255 "nvme_io_md": false, 00:08:41.255 "write_zeroes": true, 00:08:41.255 "zcopy": true, 00:08:41.255 "get_zone_info": false, 00:08:41.255 "zone_management": false, 00:08:41.255 "zone_append": false, 00:08:41.255 "compare": false, 00:08:41.255 "compare_and_write": false, 00:08:41.255 "abort": true, 00:08:41.255 "seek_hole": false, 00:08:41.255 "seek_data": false, 00:08:41.255 "copy": true, 00:08:41.255 "nvme_iov_md": false 00:08:41.255 }, 00:08:41.255 "memory_domains": [ 00:08:41.255 { 00:08:41.255 
"dma_device_id": "system", 00:08:41.255 "dma_device_type": 1 00:08:41.255 }, 00:08:41.255 { 00:08:41.255 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:41.255 "dma_device_type": 2 00:08:41.255 } 00:08:41.255 ], 00:08:41.255 "driver_specific": {} 00:08:41.255 } 00:08:41.255 ] 00:08:41.255 04:57:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.255 04:57:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:41.255 04:57:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:08:41.255 04:57:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:41.255 04:57:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:41.255 04:57:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:41.255 04:57:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:41.255 04:57:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:41.255 04:57:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:41.255 04:57:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:41.255 04:57:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:41.255 04:57:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:41.255 04:57:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:41.255 04:57:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:41.255 04:57:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:08:41.255 04:57:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.255 04:57:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.255 04:57:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:41.255 "name": "Existed_Raid", 00:08:41.255 "uuid": "ec8bb55f-c5e3-4677-b8a1-642c1f5e1113", 00:08:41.255 "strip_size_kb": 0, 00:08:41.255 "state": "online", 00:08:41.255 "raid_level": "raid1", 00:08:41.255 "superblock": false, 00:08:41.255 "num_base_bdevs": 3, 00:08:41.255 "num_base_bdevs_discovered": 3, 00:08:41.255 "num_base_bdevs_operational": 3, 00:08:41.255 "base_bdevs_list": [ 00:08:41.255 { 00:08:41.255 "name": "NewBaseBdev", 00:08:41.255 "uuid": "d7c21964-721a-4c02-9997-1eca57d2745f", 00:08:41.255 "is_configured": true, 00:08:41.255 "data_offset": 0, 00:08:41.255 "data_size": 65536 00:08:41.255 }, 00:08:41.255 { 00:08:41.255 "name": "BaseBdev2", 00:08:41.255 "uuid": "80ad240b-2dfe-4b8f-a02a-3c667ab00b9c", 00:08:41.255 "is_configured": true, 00:08:41.255 "data_offset": 0, 00:08:41.255 "data_size": 65536 00:08:41.255 }, 00:08:41.255 { 00:08:41.255 "name": "BaseBdev3", 00:08:41.255 "uuid": "3c5a2d56-91b6-4ea6-a1bd-566e588f5bd5", 00:08:41.255 "is_configured": true, 00:08:41.255 "data_offset": 0, 00:08:41.255 "data_size": 65536 00:08:41.255 } 00:08:41.255 ] 00:08:41.255 }' 00:08:41.255 04:57:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:41.255 04:57:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.515 04:57:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:08:41.515 04:57:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:41.515 04:57:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:41.515 04:57:52 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:41.515 04:57:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:41.515 04:57:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:41.515 04:57:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:41.515 04:57:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.515 04:57:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.515 04:57:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:41.515 [2024-12-14 04:57:52.325862] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:41.515 04:57:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.515 04:57:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:41.515 "name": "Existed_Raid", 00:08:41.515 "aliases": [ 00:08:41.515 "ec8bb55f-c5e3-4677-b8a1-642c1f5e1113" 00:08:41.515 ], 00:08:41.515 "product_name": "Raid Volume", 00:08:41.515 "block_size": 512, 00:08:41.515 "num_blocks": 65536, 00:08:41.515 "uuid": "ec8bb55f-c5e3-4677-b8a1-642c1f5e1113", 00:08:41.515 "assigned_rate_limits": { 00:08:41.515 "rw_ios_per_sec": 0, 00:08:41.515 "rw_mbytes_per_sec": 0, 00:08:41.515 "r_mbytes_per_sec": 0, 00:08:41.515 "w_mbytes_per_sec": 0 00:08:41.515 }, 00:08:41.515 "claimed": false, 00:08:41.515 "zoned": false, 00:08:41.515 "supported_io_types": { 00:08:41.515 "read": true, 00:08:41.515 "write": true, 00:08:41.515 "unmap": false, 00:08:41.515 "flush": false, 00:08:41.515 "reset": true, 00:08:41.515 "nvme_admin": false, 00:08:41.515 "nvme_io": false, 00:08:41.515 "nvme_io_md": false, 00:08:41.515 "write_zeroes": true, 00:08:41.515 "zcopy": false, 00:08:41.515 
"get_zone_info": false, 00:08:41.515 "zone_management": false, 00:08:41.515 "zone_append": false, 00:08:41.515 "compare": false, 00:08:41.515 "compare_and_write": false, 00:08:41.515 "abort": false, 00:08:41.515 "seek_hole": false, 00:08:41.515 "seek_data": false, 00:08:41.515 "copy": false, 00:08:41.515 "nvme_iov_md": false 00:08:41.515 }, 00:08:41.515 "memory_domains": [ 00:08:41.515 { 00:08:41.515 "dma_device_id": "system", 00:08:41.515 "dma_device_type": 1 00:08:41.515 }, 00:08:41.515 { 00:08:41.515 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:41.515 "dma_device_type": 2 00:08:41.515 }, 00:08:41.515 { 00:08:41.515 "dma_device_id": "system", 00:08:41.515 "dma_device_type": 1 00:08:41.515 }, 00:08:41.515 { 00:08:41.515 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:41.515 "dma_device_type": 2 00:08:41.515 }, 00:08:41.515 { 00:08:41.515 "dma_device_id": "system", 00:08:41.515 "dma_device_type": 1 00:08:41.515 }, 00:08:41.515 { 00:08:41.515 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:41.515 "dma_device_type": 2 00:08:41.515 } 00:08:41.515 ], 00:08:41.515 "driver_specific": { 00:08:41.515 "raid": { 00:08:41.515 "uuid": "ec8bb55f-c5e3-4677-b8a1-642c1f5e1113", 00:08:41.515 "strip_size_kb": 0, 00:08:41.515 "state": "online", 00:08:41.515 "raid_level": "raid1", 00:08:41.515 "superblock": false, 00:08:41.515 "num_base_bdevs": 3, 00:08:41.515 "num_base_bdevs_discovered": 3, 00:08:41.515 "num_base_bdevs_operational": 3, 00:08:41.515 "base_bdevs_list": [ 00:08:41.515 { 00:08:41.515 "name": "NewBaseBdev", 00:08:41.515 "uuid": "d7c21964-721a-4c02-9997-1eca57d2745f", 00:08:41.515 "is_configured": true, 00:08:41.515 "data_offset": 0, 00:08:41.515 "data_size": 65536 00:08:41.515 }, 00:08:41.515 { 00:08:41.515 "name": "BaseBdev2", 00:08:41.515 "uuid": "80ad240b-2dfe-4b8f-a02a-3c667ab00b9c", 00:08:41.515 "is_configured": true, 00:08:41.515 "data_offset": 0, 00:08:41.515 "data_size": 65536 00:08:41.515 }, 00:08:41.515 { 00:08:41.515 "name": "BaseBdev3", 00:08:41.515 "uuid": 
"3c5a2d56-91b6-4ea6-a1bd-566e588f5bd5", 00:08:41.515 "is_configured": true, 00:08:41.515 "data_offset": 0, 00:08:41.515 "data_size": 65536 00:08:41.515 } 00:08:41.515 ] 00:08:41.515 } 00:08:41.515 } 00:08:41.515 }' 00:08:41.515 04:57:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:41.775 04:57:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:08:41.775 BaseBdev2 00:08:41.775 BaseBdev3' 00:08:41.775 04:57:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:41.775 04:57:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:41.775 04:57:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:41.775 04:57:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:41.775 04:57:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:08:41.775 04:57:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.775 04:57:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.775 04:57:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.775 04:57:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:41.775 04:57:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:41.775 04:57:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:41.775 04:57:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:08:41.775 04:57:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:41.775 04:57:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.775 04:57:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.775 04:57:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.775 04:57:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:41.775 04:57:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:41.775 04:57:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:41.775 04:57:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:41.775 04:57:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:41.775 04:57:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.775 04:57:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.775 04:57:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.775 04:57:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:41.775 04:57:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:41.775 04:57:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:41.775 04:57:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.775 04:57:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.775 
[2024-12-14 04:57:52.565194] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:41.775 [2024-12-14 04:57:52.565284] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:41.775 [2024-12-14 04:57:52.565363] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:41.775 [2024-12-14 04:57:52.565625] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:41.775 [2024-12-14 04:57:52.565638] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline 00:08:41.775 04:57:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.775 04:57:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 78477 00:08:41.775 04:57:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 78477 ']' 00:08:41.775 04:57:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 78477 00:08:41.775 04:57:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:08:41.775 04:57:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:41.775 04:57:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 78477 00:08:41.775 killing process with pid 78477 00:08:41.775 04:57:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:41.776 04:57:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:41.776 04:57:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 78477' 00:08:41.776 04:57:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 78477 00:08:41.776 [2024-12-14 
04:57:52.605756] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:41.776 04:57:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 78477 00:08:41.776 [2024-12-14 04:57:52.636959] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:42.035 ************************************ 00:08:42.035 END TEST raid_state_function_test 00:08:42.035 ************************************ 00:08:42.035 04:57:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:08:42.035 00:08:42.035 real 0m8.428s 00:08:42.035 user 0m14.371s 00:08:42.035 sys 0m1.645s 00:08:42.035 04:57:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:42.035 04:57:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.295 04:57:52 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 00:08:42.295 04:57:52 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:42.295 04:57:52 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:42.295 04:57:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:42.295 ************************************ 00:08:42.295 START TEST raid_state_function_test_sb 00:08:42.295 ************************************ 00:08:42.295 04:57:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 3 true 00:08:42.295 04:57:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:08:42.296 04:57:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:08:42.296 04:57:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:08:42.296 04:57:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:42.296 04:57:52 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:42.296 04:57:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:42.296 04:57:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:42.296 04:57:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:42.296 04:57:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:42.296 04:57:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:42.296 04:57:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:42.296 04:57:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:42.296 04:57:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:42.296 04:57:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:42.296 04:57:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:42.296 04:57:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:42.296 04:57:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:42.296 04:57:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:42.296 04:57:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:42.296 04:57:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:42.296 04:57:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:42.296 04:57:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:08:42.296 
04:57:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:08:42.296 04:57:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:08:42.296 04:57:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:08:42.296 04:57:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=79075 00:08:42.296 04:57:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:42.296 04:57:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 79075' 00:08:42.296 Process raid pid: 79075 00:08:42.296 04:57:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 79075 00:08:42.296 04:57:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 79075 ']' 00:08:42.296 04:57:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:42.296 04:57:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:42.296 04:57:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:42.296 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:42.296 04:57:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:42.296 04:57:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:42.296 [2024-12-14 04:57:53.043692] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:08:42.296 [2024-12-14 04:57:53.043826] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:42.555 [2024-12-14 04:57:53.205075] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:42.555 [2024-12-14 04:57:53.250466] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:42.555 [2024-12-14 04:57:53.292284] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:42.555 [2024-12-14 04:57:53.292337] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:43.124 04:57:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:43.124 04:57:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:08:43.124 04:57:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:43.124 04:57:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.124 04:57:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:43.124 [2024-12-14 04:57:53.865684] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:43.124 [2024-12-14 04:57:53.865738] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:43.124 [2024-12-14 04:57:53.865750] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:43.124 [2024-12-14 04:57:53.865760] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:43.124 [2024-12-14 04:57:53.865765] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:08:43.124 [2024-12-14 04:57:53.865776] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:43.124 04:57:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.124 04:57:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:43.124 04:57:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:43.124 04:57:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:43.124 04:57:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:43.124 04:57:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:43.124 04:57:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:43.124 04:57:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:43.124 04:57:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:43.124 04:57:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:43.124 04:57:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:43.124 04:57:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:43.124 04:57:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:43.124 04:57:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.124 04:57:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:43.124 04:57:53 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.124 04:57:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:43.124 "name": "Existed_Raid", 00:08:43.124 "uuid": "906a77b2-58b7-443b-b33a-1b7d232f40da", 00:08:43.124 "strip_size_kb": 0, 00:08:43.124 "state": "configuring", 00:08:43.124 "raid_level": "raid1", 00:08:43.124 "superblock": true, 00:08:43.124 "num_base_bdevs": 3, 00:08:43.124 "num_base_bdevs_discovered": 0, 00:08:43.124 "num_base_bdevs_operational": 3, 00:08:43.124 "base_bdevs_list": [ 00:08:43.124 { 00:08:43.124 "name": "BaseBdev1", 00:08:43.124 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:43.124 "is_configured": false, 00:08:43.124 "data_offset": 0, 00:08:43.124 "data_size": 0 00:08:43.124 }, 00:08:43.124 { 00:08:43.124 "name": "BaseBdev2", 00:08:43.124 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:43.124 "is_configured": false, 00:08:43.124 "data_offset": 0, 00:08:43.124 "data_size": 0 00:08:43.124 }, 00:08:43.124 { 00:08:43.124 "name": "BaseBdev3", 00:08:43.124 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:43.124 "is_configured": false, 00:08:43.124 "data_offset": 0, 00:08:43.124 "data_size": 0 00:08:43.124 } 00:08:43.124 ] 00:08:43.124 }' 00:08:43.124 04:57:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:43.124 04:57:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:43.692 04:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:43.692 04:57:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.692 04:57:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:43.692 [2024-12-14 04:57:54.312814] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:43.692 [2024-12-14 04:57:54.312899] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:08:43.692 04:57:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.692 04:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:43.692 04:57:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.692 04:57:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:43.692 [2024-12-14 04:57:54.324828] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:43.692 [2024-12-14 04:57:54.324905] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:43.692 [2024-12-14 04:57:54.324934] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:43.692 [2024-12-14 04:57:54.324957] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:43.692 [2024-12-14 04:57:54.324980] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:43.692 [2024-12-14 04:57:54.325040] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:43.692 04:57:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.692 04:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:43.692 04:57:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.692 04:57:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:43.692 [2024-12-14 04:57:54.345575] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:43.692 BaseBdev1 
00:08:43.692 04:57:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.692 04:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:43.692 04:57:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:08:43.692 04:57:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:43.692 04:57:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:43.692 04:57:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:43.692 04:57:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:43.692 04:57:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:43.692 04:57:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.692 04:57:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:43.692 04:57:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.692 04:57:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:43.692 04:57:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.693 04:57:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:43.693 [ 00:08:43.693 { 00:08:43.693 "name": "BaseBdev1", 00:08:43.693 "aliases": [ 00:08:43.693 "92926f00-62ec-4c10-86e1-be12c7a22bff" 00:08:43.693 ], 00:08:43.693 "product_name": "Malloc disk", 00:08:43.693 "block_size": 512, 00:08:43.693 "num_blocks": 65536, 00:08:43.693 "uuid": "92926f00-62ec-4c10-86e1-be12c7a22bff", 00:08:43.693 "assigned_rate_limits": { 00:08:43.693 
"rw_ios_per_sec": 0, 00:08:43.693 "rw_mbytes_per_sec": 0, 00:08:43.693 "r_mbytes_per_sec": 0, 00:08:43.693 "w_mbytes_per_sec": 0 00:08:43.693 }, 00:08:43.693 "claimed": true, 00:08:43.693 "claim_type": "exclusive_write", 00:08:43.693 "zoned": false, 00:08:43.693 "supported_io_types": { 00:08:43.693 "read": true, 00:08:43.693 "write": true, 00:08:43.693 "unmap": true, 00:08:43.693 "flush": true, 00:08:43.693 "reset": true, 00:08:43.693 "nvme_admin": false, 00:08:43.693 "nvme_io": false, 00:08:43.693 "nvme_io_md": false, 00:08:43.693 "write_zeroes": true, 00:08:43.693 "zcopy": true, 00:08:43.693 "get_zone_info": false, 00:08:43.693 "zone_management": false, 00:08:43.693 "zone_append": false, 00:08:43.693 "compare": false, 00:08:43.693 "compare_and_write": false, 00:08:43.693 "abort": true, 00:08:43.693 "seek_hole": false, 00:08:43.693 "seek_data": false, 00:08:43.693 "copy": true, 00:08:43.693 "nvme_iov_md": false 00:08:43.693 }, 00:08:43.693 "memory_domains": [ 00:08:43.693 { 00:08:43.693 "dma_device_id": "system", 00:08:43.693 "dma_device_type": 1 00:08:43.693 }, 00:08:43.693 { 00:08:43.693 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:43.693 "dma_device_type": 2 00:08:43.693 } 00:08:43.693 ], 00:08:43.693 "driver_specific": {} 00:08:43.693 } 00:08:43.693 ] 00:08:43.693 04:57:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.693 04:57:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:43.693 04:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:43.693 04:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:43.693 04:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:43.693 04:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:08:43.693 04:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:43.693 04:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:43.693 04:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:43.693 04:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:43.693 04:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:43.693 04:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:43.693 04:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:43.693 04:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:43.693 04:57:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.693 04:57:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:43.693 04:57:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.693 04:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:43.693 "name": "Existed_Raid", 00:08:43.693 "uuid": "ff657d10-f36f-472b-ad42-4774974a42c5", 00:08:43.693 "strip_size_kb": 0, 00:08:43.693 "state": "configuring", 00:08:43.693 "raid_level": "raid1", 00:08:43.693 "superblock": true, 00:08:43.693 "num_base_bdevs": 3, 00:08:43.693 "num_base_bdevs_discovered": 1, 00:08:43.693 "num_base_bdevs_operational": 3, 00:08:43.693 "base_bdevs_list": [ 00:08:43.693 { 00:08:43.693 "name": "BaseBdev1", 00:08:43.693 "uuid": "92926f00-62ec-4c10-86e1-be12c7a22bff", 00:08:43.693 "is_configured": true, 00:08:43.693 "data_offset": 2048, 00:08:43.693 "data_size": 63488 
00:08:43.693 }, 00:08:43.693 { 00:08:43.693 "name": "BaseBdev2", 00:08:43.693 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:43.693 "is_configured": false, 00:08:43.693 "data_offset": 0, 00:08:43.693 "data_size": 0 00:08:43.693 }, 00:08:43.693 { 00:08:43.693 "name": "BaseBdev3", 00:08:43.693 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:43.693 "is_configured": false, 00:08:43.693 "data_offset": 0, 00:08:43.693 "data_size": 0 00:08:43.693 } 00:08:43.693 ] 00:08:43.693 }' 00:08:43.693 04:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:43.693 04:57:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:44.263 04:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:44.263 04:57:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.263 04:57:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:44.263 [2024-12-14 04:57:54.848748] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:44.263 [2024-12-14 04:57:54.848853] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:08:44.263 04:57:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.263 04:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:44.263 04:57:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.263 04:57:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:44.263 [2024-12-14 04:57:54.860765] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:44.263 [2024-12-14 04:57:54.862538] 
bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:44.263 [2024-12-14 04:57:54.862622] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:44.263 [2024-12-14 04:57:54.862640] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:44.263 [2024-12-14 04:57:54.862653] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:44.263 04:57:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.263 04:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:44.263 04:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:44.263 04:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:44.263 04:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:44.263 04:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:44.263 04:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:44.263 04:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:44.263 04:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:44.263 04:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:44.263 04:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:44.263 04:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:44.263 04:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:08:44.263 04:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:44.263 04:57:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.263 04:57:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:44.263 04:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:44.263 04:57:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.263 04:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:44.263 "name": "Existed_Raid", 00:08:44.263 "uuid": "504ae1dc-f62a-469b-9834-0a03e5b0240d", 00:08:44.263 "strip_size_kb": 0, 00:08:44.263 "state": "configuring", 00:08:44.263 "raid_level": "raid1", 00:08:44.263 "superblock": true, 00:08:44.263 "num_base_bdevs": 3, 00:08:44.263 "num_base_bdevs_discovered": 1, 00:08:44.263 "num_base_bdevs_operational": 3, 00:08:44.263 "base_bdevs_list": [ 00:08:44.263 { 00:08:44.263 "name": "BaseBdev1", 00:08:44.263 "uuid": "92926f00-62ec-4c10-86e1-be12c7a22bff", 00:08:44.263 "is_configured": true, 00:08:44.263 "data_offset": 2048, 00:08:44.263 "data_size": 63488 00:08:44.263 }, 00:08:44.263 { 00:08:44.263 "name": "BaseBdev2", 00:08:44.263 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:44.263 "is_configured": false, 00:08:44.263 "data_offset": 0, 00:08:44.263 "data_size": 0 00:08:44.263 }, 00:08:44.263 { 00:08:44.263 "name": "BaseBdev3", 00:08:44.264 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:44.264 "is_configured": false, 00:08:44.264 "data_offset": 0, 00:08:44.264 "data_size": 0 00:08:44.264 } 00:08:44.264 ] 00:08:44.264 }' 00:08:44.264 04:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:44.264 04:57:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:08:44.530 04:57:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:44.530 04:57:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.530 04:57:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:44.530 [2024-12-14 04:57:55.316806] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:44.530 BaseBdev2 00:08:44.530 04:57:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.530 04:57:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:44.530 04:57:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:08:44.530 04:57:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:44.530 04:57:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:44.530 04:57:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:44.530 04:57:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:44.530 04:57:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:44.530 04:57:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.530 04:57:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:44.530 04:57:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.530 04:57:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:44.530 04:57:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:08:44.530 04:57:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:44.530 [ 00:08:44.530 { 00:08:44.530 "name": "BaseBdev2", 00:08:44.530 "aliases": [ 00:08:44.530 "c554b9e3-74f1-4ad0-8fd5-6349536076d6" 00:08:44.530 ], 00:08:44.530 "product_name": "Malloc disk", 00:08:44.530 "block_size": 512, 00:08:44.530 "num_blocks": 65536, 00:08:44.530 "uuid": "c554b9e3-74f1-4ad0-8fd5-6349536076d6", 00:08:44.530 "assigned_rate_limits": { 00:08:44.530 "rw_ios_per_sec": 0, 00:08:44.530 "rw_mbytes_per_sec": 0, 00:08:44.530 "r_mbytes_per_sec": 0, 00:08:44.530 "w_mbytes_per_sec": 0 00:08:44.530 }, 00:08:44.530 "claimed": true, 00:08:44.530 "claim_type": "exclusive_write", 00:08:44.530 "zoned": false, 00:08:44.530 "supported_io_types": { 00:08:44.530 "read": true, 00:08:44.530 "write": true, 00:08:44.530 "unmap": true, 00:08:44.530 "flush": true, 00:08:44.530 "reset": true, 00:08:44.530 "nvme_admin": false, 00:08:44.530 "nvme_io": false, 00:08:44.530 "nvme_io_md": false, 00:08:44.530 "write_zeroes": true, 00:08:44.530 "zcopy": true, 00:08:44.530 "get_zone_info": false, 00:08:44.530 "zone_management": false, 00:08:44.530 "zone_append": false, 00:08:44.530 "compare": false, 00:08:44.530 "compare_and_write": false, 00:08:44.530 "abort": true, 00:08:44.530 "seek_hole": false, 00:08:44.530 "seek_data": false, 00:08:44.530 "copy": true, 00:08:44.530 "nvme_iov_md": false 00:08:44.530 }, 00:08:44.530 "memory_domains": [ 00:08:44.530 { 00:08:44.530 "dma_device_id": "system", 00:08:44.530 "dma_device_type": 1 00:08:44.530 }, 00:08:44.530 { 00:08:44.530 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:44.530 "dma_device_type": 2 00:08:44.530 } 00:08:44.530 ], 00:08:44.530 "driver_specific": {} 00:08:44.530 } 00:08:44.530 ] 00:08:44.530 04:57:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.530 04:57:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 
00:08:44.530 04:57:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:44.530 04:57:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:44.530 04:57:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:44.530 04:57:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:44.530 04:57:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:44.530 04:57:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:44.530 04:57:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:44.530 04:57:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:44.530 04:57:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:44.530 04:57:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:44.530 04:57:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:44.530 04:57:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:44.530 04:57:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:44.530 04:57:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:44.530 04:57:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.530 04:57:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:44.530 04:57:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.530 
04:57:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:44.530 "name": "Existed_Raid", 00:08:44.530 "uuid": "504ae1dc-f62a-469b-9834-0a03e5b0240d", 00:08:44.530 "strip_size_kb": 0, 00:08:44.530 "state": "configuring", 00:08:44.530 "raid_level": "raid1", 00:08:44.530 "superblock": true, 00:08:44.530 "num_base_bdevs": 3, 00:08:44.530 "num_base_bdevs_discovered": 2, 00:08:44.530 "num_base_bdevs_operational": 3, 00:08:44.530 "base_bdevs_list": [ 00:08:44.530 { 00:08:44.530 "name": "BaseBdev1", 00:08:44.530 "uuid": "92926f00-62ec-4c10-86e1-be12c7a22bff", 00:08:44.530 "is_configured": true, 00:08:44.530 "data_offset": 2048, 00:08:44.530 "data_size": 63488 00:08:44.530 }, 00:08:44.530 { 00:08:44.530 "name": "BaseBdev2", 00:08:44.530 "uuid": "c554b9e3-74f1-4ad0-8fd5-6349536076d6", 00:08:44.530 "is_configured": true, 00:08:44.530 "data_offset": 2048, 00:08:44.530 "data_size": 63488 00:08:44.530 }, 00:08:44.530 { 00:08:44.530 "name": "BaseBdev3", 00:08:44.530 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:44.530 "is_configured": false, 00:08:44.530 "data_offset": 0, 00:08:44.530 "data_size": 0 00:08:44.530 } 00:08:44.530 ] 00:08:44.530 }' 00:08:44.789 04:57:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:44.789 04:57:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:45.048 04:57:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:45.048 04:57:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.048 04:57:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:45.048 [2024-12-14 04:57:55.794858] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:45.048 [2024-12-14 04:57:55.795166] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 
0x617000006980 00:08:45.048 [2024-12-14 04:57:55.795204] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:45.048 [2024-12-14 04:57:55.795518] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:08:45.048 BaseBdev3 00:08:45.048 [2024-12-14 04:57:55.795653] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:08:45.048 [2024-12-14 04:57:55.795664] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:08:45.048 [2024-12-14 04:57:55.795804] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:45.048 04:57:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.048 04:57:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:08:45.048 04:57:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:08:45.048 04:57:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:45.048 04:57:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:45.048 04:57:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:45.048 04:57:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:45.048 04:57:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:45.048 04:57:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.048 04:57:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:45.048 04:57:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.048 04:57:55 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:45.048 04:57:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.048 04:57:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:45.048 [ 00:08:45.048 { 00:08:45.048 "name": "BaseBdev3", 00:08:45.048 "aliases": [ 00:08:45.048 "6b049108-b5d2-4cbb-a02b-9e447a3f66dd" 00:08:45.048 ], 00:08:45.048 "product_name": "Malloc disk", 00:08:45.048 "block_size": 512, 00:08:45.048 "num_blocks": 65536, 00:08:45.048 "uuid": "6b049108-b5d2-4cbb-a02b-9e447a3f66dd", 00:08:45.048 "assigned_rate_limits": { 00:08:45.048 "rw_ios_per_sec": 0, 00:08:45.048 "rw_mbytes_per_sec": 0, 00:08:45.048 "r_mbytes_per_sec": 0, 00:08:45.048 "w_mbytes_per_sec": 0 00:08:45.048 }, 00:08:45.048 "claimed": true, 00:08:45.048 "claim_type": "exclusive_write", 00:08:45.048 "zoned": false, 00:08:45.048 "supported_io_types": { 00:08:45.048 "read": true, 00:08:45.048 "write": true, 00:08:45.048 "unmap": true, 00:08:45.048 "flush": true, 00:08:45.048 "reset": true, 00:08:45.048 "nvme_admin": false, 00:08:45.048 "nvme_io": false, 00:08:45.048 "nvme_io_md": false, 00:08:45.048 "write_zeroes": true, 00:08:45.048 "zcopy": true, 00:08:45.048 "get_zone_info": false, 00:08:45.048 "zone_management": false, 00:08:45.048 "zone_append": false, 00:08:45.048 "compare": false, 00:08:45.048 "compare_and_write": false, 00:08:45.048 "abort": true, 00:08:45.048 "seek_hole": false, 00:08:45.048 "seek_data": false, 00:08:45.048 "copy": true, 00:08:45.048 "nvme_iov_md": false 00:08:45.048 }, 00:08:45.048 "memory_domains": [ 00:08:45.048 { 00:08:45.048 "dma_device_id": "system", 00:08:45.048 "dma_device_type": 1 00:08:45.048 }, 00:08:45.048 { 00:08:45.048 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:45.048 "dma_device_type": 2 00:08:45.048 } 00:08:45.048 ], 00:08:45.048 "driver_specific": {} 00:08:45.048 } 00:08:45.048 ] 
00:08:45.048 04:57:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.048 04:57:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:45.048 04:57:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:45.048 04:57:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:45.048 04:57:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:08:45.048 04:57:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:45.048 04:57:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:45.048 04:57:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:45.048 04:57:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:45.048 04:57:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:45.048 04:57:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:45.048 04:57:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:45.048 04:57:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:45.048 04:57:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:45.048 04:57:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:45.048 04:57:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:45.048 04:57:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.048 
04:57:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:45.048 04:57:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.048 04:57:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:45.048 "name": "Existed_Raid", 00:08:45.048 "uuid": "504ae1dc-f62a-469b-9834-0a03e5b0240d", 00:08:45.048 "strip_size_kb": 0, 00:08:45.048 "state": "online", 00:08:45.048 "raid_level": "raid1", 00:08:45.048 "superblock": true, 00:08:45.048 "num_base_bdevs": 3, 00:08:45.048 "num_base_bdevs_discovered": 3, 00:08:45.048 "num_base_bdevs_operational": 3, 00:08:45.048 "base_bdevs_list": [ 00:08:45.048 { 00:08:45.048 "name": "BaseBdev1", 00:08:45.048 "uuid": "92926f00-62ec-4c10-86e1-be12c7a22bff", 00:08:45.048 "is_configured": true, 00:08:45.048 "data_offset": 2048, 00:08:45.048 "data_size": 63488 00:08:45.048 }, 00:08:45.048 { 00:08:45.048 "name": "BaseBdev2", 00:08:45.048 "uuid": "c554b9e3-74f1-4ad0-8fd5-6349536076d6", 00:08:45.048 "is_configured": true, 00:08:45.048 "data_offset": 2048, 00:08:45.048 "data_size": 63488 00:08:45.048 }, 00:08:45.048 { 00:08:45.048 "name": "BaseBdev3", 00:08:45.048 "uuid": "6b049108-b5d2-4cbb-a02b-9e447a3f66dd", 00:08:45.048 "is_configured": true, 00:08:45.048 "data_offset": 2048, 00:08:45.048 "data_size": 63488 00:08:45.048 } 00:08:45.048 ] 00:08:45.048 }' 00:08:45.048 04:57:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:45.048 04:57:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:45.617 04:57:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:45.617 04:57:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:45.617 04:57:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
00:08:45.617 04:57:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:45.617 04:57:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:45.617 04:57:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:45.617 04:57:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:45.617 04:57:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:45.617 04:57:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.617 04:57:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:45.617 [2024-12-14 04:57:56.250459] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:45.617 04:57:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.617 04:57:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:45.617 "name": "Existed_Raid", 00:08:45.617 "aliases": [ 00:08:45.617 "504ae1dc-f62a-469b-9834-0a03e5b0240d" 00:08:45.617 ], 00:08:45.617 "product_name": "Raid Volume", 00:08:45.617 "block_size": 512, 00:08:45.617 "num_blocks": 63488, 00:08:45.617 "uuid": "504ae1dc-f62a-469b-9834-0a03e5b0240d", 00:08:45.617 "assigned_rate_limits": { 00:08:45.617 "rw_ios_per_sec": 0, 00:08:45.617 "rw_mbytes_per_sec": 0, 00:08:45.617 "r_mbytes_per_sec": 0, 00:08:45.617 "w_mbytes_per_sec": 0 00:08:45.617 }, 00:08:45.617 "claimed": false, 00:08:45.617 "zoned": false, 00:08:45.617 "supported_io_types": { 00:08:45.617 "read": true, 00:08:45.617 "write": true, 00:08:45.617 "unmap": false, 00:08:45.617 "flush": false, 00:08:45.617 "reset": true, 00:08:45.617 "nvme_admin": false, 00:08:45.617 "nvme_io": false, 00:08:45.617 "nvme_io_md": false, 00:08:45.617 "write_zeroes": true, 
00:08:45.617 "zcopy": false, 00:08:45.617 "get_zone_info": false, 00:08:45.617 "zone_management": false, 00:08:45.617 "zone_append": false, 00:08:45.617 "compare": false, 00:08:45.617 "compare_and_write": false, 00:08:45.617 "abort": false, 00:08:45.617 "seek_hole": false, 00:08:45.617 "seek_data": false, 00:08:45.617 "copy": false, 00:08:45.617 "nvme_iov_md": false 00:08:45.617 }, 00:08:45.617 "memory_domains": [ 00:08:45.617 { 00:08:45.617 "dma_device_id": "system", 00:08:45.617 "dma_device_type": 1 00:08:45.617 }, 00:08:45.617 { 00:08:45.617 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:45.617 "dma_device_type": 2 00:08:45.617 }, 00:08:45.617 { 00:08:45.617 "dma_device_id": "system", 00:08:45.617 "dma_device_type": 1 00:08:45.617 }, 00:08:45.617 { 00:08:45.617 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:45.617 "dma_device_type": 2 00:08:45.617 }, 00:08:45.617 { 00:08:45.617 "dma_device_id": "system", 00:08:45.617 "dma_device_type": 1 00:08:45.617 }, 00:08:45.617 { 00:08:45.617 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:45.617 "dma_device_type": 2 00:08:45.617 } 00:08:45.617 ], 00:08:45.617 "driver_specific": { 00:08:45.617 "raid": { 00:08:45.617 "uuid": "504ae1dc-f62a-469b-9834-0a03e5b0240d", 00:08:45.617 "strip_size_kb": 0, 00:08:45.617 "state": "online", 00:08:45.617 "raid_level": "raid1", 00:08:45.617 "superblock": true, 00:08:45.617 "num_base_bdevs": 3, 00:08:45.617 "num_base_bdevs_discovered": 3, 00:08:45.617 "num_base_bdevs_operational": 3, 00:08:45.617 "base_bdevs_list": [ 00:08:45.617 { 00:08:45.617 "name": "BaseBdev1", 00:08:45.617 "uuid": "92926f00-62ec-4c10-86e1-be12c7a22bff", 00:08:45.617 "is_configured": true, 00:08:45.617 "data_offset": 2048, 00:08:45.617 "data_size": 63488 00:08:45.617 }, 00:08:45.617 { 00:08:45.617 "name": "BaseBdev2", 00:08:45.617 "uuid": "c554b9e3-74f1-4ad0-8fd5-6349536076d6", 00:08:45.617 "is_configured": true, 00:08:45.617 "data_offset": 2048, 00:08:45.617 "data_size": 63488 00:08:45.617 }, 00:08:45.617 { 
00:08:45.617 "name": "BaseBdev3", 00:08:45.617 "uuid": "6b049108-b5d2-4cbb-a02b-9e447a3f66dd", 00:08:45.617 "is_configured": true, 00:08:45.617 "data_offset": 2048, 00:08:45.617 "data_size": 63488 00:08:45.617 } 00:08:45.617 ] 00:08:45.617 } 00:08:45.617 } 00:08:45.617 }' 00:08:45.617 04:57:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:45.617 04:57:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:45.618 BaseBdev2 00:08:45.618 BaseBdev3' 00:08:45.618 04:57:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:45.618 04:57:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:45.618 04:57:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:45.618 04:57:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:45.618 04:57:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:45.618 04:57:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.618 04:57:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:45.618 04:57:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.618 04:57:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:45.618 04:57:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:45.618 04:57:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:45.618 04:57:56 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:45.618 04:57:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:45.618 04:57:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.618 04:57:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:45.618 04:57:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.618 04:57:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:45.618 04:57:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:45.618 04:57:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:45.618 04:57:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:45.618 04:57:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.618 04:57:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:45.618 04:57:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:45.618 04:57:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.878 04:57:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:45.878 04:57:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:45.878 04:57:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:45.878 04:57:56 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.878 04:57:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:45.878 [2024-12-14 04:57:56.505758] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:45.878 04:57:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.878 04:57:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:45.878 04:57:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:08:45.878 04:57:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:45.878 04:57:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:08:45.878 04:57:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:08:45.878 04:57:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:08:45.878 04:57:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:45.878 04:57:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:45.878 04:57:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:45.878 04:57:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:45.878 04:57:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:45.878 04:57:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:45.878 04:57:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:45.878 04:57:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:45.878 
04:57:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:45.878 04:57:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:45.878 04:57:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:45.878 04:57:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.878 04:57:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:45.878 04:57:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.878 04:57:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:45.878 "name": "Existed_Raid", 00:08:45.878 "uuid": "504ae1dc-f62a-469b-9834-0a03e5b0240d", 00:08:45.878 "strip_size_kb": 0, 00:08:45.878 "state": "online", 00:08:45.878 "raid_level": "raid1", 00:08:45.878 "superblock": true, 00:08:45.878 "num_base_bdevs": 3, 00:08:45.878 "num_base_bdevs_discovered": 2, 00:08:45.878 "num_base_bdevs_operational": 2, 00:08:45.878 "base_bdevs_list": [ 00:08:45.878 { 00:08:45.878 "name": null, 00:08:45.878 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:45.878 "is_configured": false, 00:08:45.878 "data_offset": 0, 00:08:45.878 "data_size": 63488 00:08:45.878 }, 00:08:45.878 { 00:08:45.878 "name": "BaseBdev2", 00:08:45.878 "uuid": "c554b9e3-74f1-4ad0-8fd5-6349536076d6", 00:08:45.878 "is_configured": true, 00:08:45.878 "data_offset": 2048, 00:08:45.878 "data_size": 63488 00:08:45.878 }, 00:08:45.878 { 00:08:45.878 "name": "BaseBdev3", 00:08:45.878 "uuid": "6b049108-b5d2-4cbb-a02b-9e447a3f66dd", 00:08:45.878 "is_configured": true, 00:08:45.878 "data_offset": 2048, 00:08:45.878 "data_size": 63488 00:08:45.878 } 00:08:45.878 ] 00:08:45.878 }' 00:08:45.878 04:57:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:45.878 
04:57:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.138 04:57:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:46.138 04:57:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:46.138 04:57:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:46.138 04:57:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:46.138 04:57:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.138 04:57:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.138 04:57:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.138 04:57:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:46.138 04:57:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:46.138 04:57:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:46.138 04:57:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.138 04:57:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.138 [2024-12-14 04:57:56.988294] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:46.138 04:57:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.138 04:57:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:46.138 04:57:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:46.138 04:57:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:08:46.138 04:57:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:46.138 04:57:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.138 04:57:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.398 04:57:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.398 04:57:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:46.398 04:57:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:46.398 04:57:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:08:46.398 04:57:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.398 04:57:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.398 [2024-12-14 04:57:57.059489] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:46.398 [2024-12-14 04:57:57.059633] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:46.398 [2024-12-14 04:57:57.071240] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:46.398 [2024-12-14 04:57:57.071339] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:46.398 [2024-12-14 04:57:57.071393] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:08:46.398 04:57:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.398 04:57:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:46.398 04:57:57 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:46.398 04:57:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:46.398 04:57:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.398 04:57:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.398 04:57:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:46.399 04:57:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.399 04:57:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:46.399 04:57:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:46.399 04:57:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:08:46.399 04:57:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:08:46.399 04:57:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:46.399 04:57:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:46.399 04:57:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.399 04:57:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.399 BaseBdev2 00:08:46.399 04:57:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.399 04:57:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:08:46.399 04:57:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:08:46.399 04:57:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 
00:08:46.399 04:57:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:46.399 04:57:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:46.399 04:57:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:46.399 04:57:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:46.399 04:57:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.399 04:57:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.399 04:57:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.399 04:57:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:46.399 04:57:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.399 04:57:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.399 [ 00:08:46.399 { 00:08:46.399 "name": "BaseBdev2", 00:08:46.399 "aliases": [ 00:08:46.399 "08f7d2bd-6417-4f2c-956f-c4dae932a8b5" 00:08:46.399 ], 00:08:46.399 "product_name": "Malloc disk", 00:08:46.399 "block_size": 512, 00:08:46.399 "num_blocks": 65536, 00:08:46.399 "uuid": "08f7d2bd-6417-4f2c-956f-c4dae932a8b5", 00:08:46.399 "assigned_rate_limits": { 00:08:46.399 "rw_ios_per_sec": 0, 00:08:46.399 "rw_mbytes_per_sec": 0, 00:08:46.399 "r_mbytes_per_sec": 0, 00:08:46.399 "w_mbytes_per_sec": 0 00:08:46.399 }, 00:08:46.399 "claimed": false, 00:08:46.399 "zoned": false, 00:08:46.399 "supported_io_types": { 00:08:46.399 "read": true, 00:08:46.399 "write": true, 00:08:46.399 "unmap": true, 00:08:46.399 "flush": true, 00:08:46.399 "reset": true, 00:08:46.399 "nvme_admin": false, 00:08:46.399 "nvme_io": false, 00:08:46.399 
"nvme_io_md": false, 00:08:46.399 "write_zeroes": true, 00:08:46.399 "zcopy": true, 00:08:46.399 "get_zone_info": false, 00:08:46.399 "zone_management": false, 00:08:46.399 "zone_append": false, 00:08:46.399 "compare": false, 00:08:46.399 "compare_and_write": false, 00:08:46.399 "abort": true, 00:08:46.399 "seek_hole": false, 00:08:46.399 "seek_data": false, 00:08:46.399 "copy": true, 00:08:46.399 "nvme_iov_md": false 00:08:46.399 }, 00:08:46.399 "memory_domains": [ 00:08:46.399 { 00:08:46.399 "dma_device_id": "system", 00:08:46.399 "dma_device_type": 1 00:08:46.399 }, 00:08:46.399 { 00:08:46.399 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:46.399 "dma_device_type": 2 00:08:46.399 } 00:08:46.399 ], 00:08:46.399 "driver_specific": {} 00:08:46.399 } 00:08:46.399 ] 00:08:46.399 04:57:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.399 04:57:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:46.399 04:57:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:46.399 04:57:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:46.399 04:57:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:46.399 04:57:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.399 04:57:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.399 BaseBdev3 00:08:46.399 04:57:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.399 04:57:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:08:46.399 04:57:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:08:46.399 04:57:57 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:46.399 04:57:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:46.399 04:57:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:46.399 04:57:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:46.399 04:57:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:46.399 04:57:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.399 04:57:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.399 04:57:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.399 04:57:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:46.399 04:57:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.399 04:57:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.399 [ 00:08:46.399 { 00:08:46.399 "name": "BaseBdev3", 00:08:46.399 "aliases": [ 00:08:46.399 "3a490e98-91fa-4b6b-8c9e-7927241edfbf" 00:08:46.399 ], 00:08:46.399 "product_name": "Malloc disk", 00:08:46.399 "block_size": 512, 00:08:46.399 "num_blocks": 65536, 00:08:46.399 "uuid": "3a490e98-91fa-4b6b-8c9e-7927241edfbf", 00:08:46.399 "assigned_rate_limits": { 00:08:46.399 "rw_ios_per_sec": 0, 00:08:46.399 "rw_mbytes_per_sec": 0, 00:08:46.399 "r_mbytes_per_sec": 0, 00:08:46.399 "w_mbytes_per_sec": 0 00:08:46.399 }, 00:08:46.399 "claimed": false, 00:08:46.399 "zoned": false, 00:08:46.399 "supported_io_types": { 00:08:46.399 "read": true, 00:08:46.399 "write": true, 00:08:46.399 "unmap": true, 00:08:46.399 "flush": true, 00:08:46.399 "reset": true, 00:08:46.399 "nvme_admin": false, 
00:08:46.399 "nvme_io": false, 00:08:46.399 "nvme_io_md": false, 00:08:46.399 "write_zeroes": true, 00:08:46.399 "zcopy": true, 00:08:46.399 "get_zone_info": false, 00:08:46.399 "zone_management": false, 00:08:46.399 "zone_append": false, 00:08:46.399 "compare": false, 00:08:46.399 "compare_and_write": false, 00:08:46.399 "abort": true, 00:08:46.399 "seek_hole": false, 00:08:46.399 "seek_data": false, 00:08:46.399 "copy": true, 00:08:46.399 "nvme_iov_md": false 00:08:46.399 }, 00:08:46.399 "memory_domains": [ 00:08:46.399 { 00:08:46.399 "dma_device_id": "system", 00:08:46.399 "dma_device_type": 1 00:08:46.399 }, 00:08:46.399 { 00:08:46.399 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:46.399 "dma_device_type": 2 00:08:46.399 } 00:08:46.399 ], 00:08:46.399 "driver_specific": {} 00:08:46.399 } 00:08:46.399 ] 00:08:46.399 04:57:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.399 04:57:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:46.399 04:57:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:46.399 04:57:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:46.399 04:57:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:46.399 04:57:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.399 04:57:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.399 [2024-12-14 04:57:57.242203] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:46.399 [2024-12-14 04:57:57.242245] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:46.399 [2024-12-14 04:57:57.242279] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:46.399 [2024-12-14 04:57:57.244110] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:46.399 04:57:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.399 04:57:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:46.399 04:57:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:46.399 04:57:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:46.399 04:57:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:46.399 04:57:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:46.399 04:57:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:46.399 04:57:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:46.399 04:57:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:46.399 04:57:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:46.399 04:57:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:46.399 04:57:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:46.399 04:57:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:46.399 04:57:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.399 04:57:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.399 
04:57:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.659 04:57:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:46.659 "name": "Existed_Raid", 00:08:46.659 "uuid": "dc2bd974-6425-42ec-ae83-7c2f133ac927", 00:08:46.659 "strip_size_kb": 0, 00:08:46.659 "state": "configuring", 00:08:46.659 "raid_level": "raid1", 00:08:46.659 "superblock": true, 00:08:46.659 "num_base_bdevs": 3, 00:08:46.659 "num_base_bdevs_discovered": 2, 00:08:46.659 "num_base_bdevs_operational": 3, 00:08:46.659 "base_bdevs_list": [ 00:08:46.659 { 00:08:46.659 "name": "BaseBdev1", 00:08:46.659 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:46.659 "is_configured": false, 00:08:46.659 "data_offset": 0, 00:08:46.659 "data_size": 0 00:08:46.659 }, 00:08:46.659 { 00:08:46.659 "name": "BaseBdev2", 00:08:46.659 "uuid": "08f7d2bd-6417-4f2c-956f-c4dae932a8b5", 00:08:46.659 "is_configured": true, 00:08:46.659 "data_offset": 2048, 00:08:46.659 "data_size": 63488 00:08:46.659 }, 00:08:46.659 { 00:08:46.659 "name": "BaseBdev3", 00:08:46.659 "uuid": "3a490e98-91fa-4b6b-8c9e-7927241edfbf", 00:08:46.659 "is_configured": true, 00:08:46.659 "data_offset": 2048, 00:08:46.659 "data_size": 63488 00:08:46.659 } 00:08:46.659 ] 00:08:46.659 }' 00:08:46.659 04:57:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:46.659 04:57:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.918 04:57:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:08:46.918 04:57:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.918 04:57:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.918 [2024-12-14 04:57:57.657446] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:46.918 04:57:57 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.918 04:57:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:46.918 04:57:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:46.918 04:57:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:46.918 04:57:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:46.918 04:57:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:46.918 04:57:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:46.918 04:57:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:46.918 04:57:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:46.918 04:57:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:46.918 04:57:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:46.918 04:57:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:46.918 04:57:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.918 04:57:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.918 04:57:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:46.918 04:57:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.918 04:57:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:46.918 "name": 
"Existed_Raid", 00:08:46.918 "uuid": "dc2bd974-6425-42ec-ae83-7c2f133ac927", 00:08:46.918 "strip_size_kb": 0, 00:08:46.918 "state": "configuring", 00:08:46.918 "raid_level": "raid1", 00:08:46.918 "superblock": true, 00:08:46.918 "num_base_bdevs": 3, 00:08:46.918 "num_base_bdevs_discovered": 1, 00:08:46.918 "num_base_bdevs_operational": 3, 00:08:46.918 "base_bdevs_list": [ 00:08:46.918 { 00:08:46.918 "name": "BaseBdev1", 00:08:46.918 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:46.918 "is_configured": false, 00:08:46.918 "data_offset": 0, 00:08:46.918 "data_size": 0 00:08:46.918 }, 00:08:46.918 { 00:08:46.918 "name": null, 00:08:46.918 "uuid": "08f7d2bd-6417-4f2c-956f-c4dae932a8b5", 00:08:46.918 "is_configured": false, 00:08:46.918 "data_offset": 0, 00:08:46.918 "data_size": 63488 00:08:46.918 }, 00:08:46.918 { 00:08:46.918 "name": "BaseBdev3", 00:08:46.918 "uuid": "3a490e98-91fa-4b6b-8c9e-7927241edfbf", 00:08:46.918 "is_configured": true, 00:08:46.918 "data_offset": 2048, 00:08:46.918 "data_size": 63488 00:08:46.918 } 00:08:46.918 ] 00:08:46.918 }' 00:08:46.918 04:57:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:46.918 04:57:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.486 04:57:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:47.486 04:57:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:47.486 04:57:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.486 04:57:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.486 04:57:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.486 04:57:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:08:47.486 
04:57:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:47.486 04:57:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.486 04:57:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.486 [2024-12-14 04:57:58.175504] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:47.486 BaseBdev1 00:08:47.486 04:57:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.486 04:57:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:08:47.486 04:57:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:08:47.486 04:57:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:47.486 04:57:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:47.486 04:57:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:47.486 04:57:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:47.486 04:57:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:47.486 04:57:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.486 04:57:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.486 04:57:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.486 04:57:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:47.486 04:57:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:08:47.486 04:57:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.486 [ 00:08:47.486 { 00:08:47.486 "name": "BaseBdev1", 00:08:47.486 "aliases": [ 00:08:47.486 "0db2028d-6341-4b0b-b59e-2e9026ee1e3d" 00:08:47.486 ], 00:08:47.486 "product_name": "Malloc disk", 00:08:47.486 "block_size": 512, 00:08:47.486 "num_blocks": 65536, 00:08:47.486 "uuid": "0db2028d-6341-4b0b-b59e-2e9026ee1e3d", 00:08:47.486 "assigned_rate_limits": { 00:08:47.486 "rw_ios_per_sec": 0, 00:08:47.486 "rw_mbytes_per_sec": 0, 00:08:47.486 "r_mbytes_per_sec": 0, 00:08:47.486 "w_mbytes_per_sec": 0 00:08:47.486 }, 00:08:47.486 "claimed": true, 00:08:47.486 "claim_type": "exclusive_write", 00:08:47.486 "zoned": false, 00:08:47.486 "supported_io_types": { 00:08:47.486 "read": true, 00:08:47.486 "write": true, 00:08:47.486 "unmap": true, 00:08:47.486 "flush": true, 00:08:47.486 "reset": true, 00:08:47.486 "nvme_admin": false, 00:08:47.486 "nvme_io": false, 00:08:47.486 "nvme_io_md": false, 00:08:47.486 "write_zeroes": true, 00:08:47.486 "zcopy": true, 00:08:47.486 "get_zone_info": false, 00:08:47.486 "zone_management": false, 00:08:47.486 "zone_append": false, 00:08:47.486 "compare": false, 00:08:47.486 "compare_and_write": false, 00:08:47.486 "abort": true, 00:08:47.486 "seek_hole": false, 00:08:47.486 "seek_data": false, 00:08:47.486 "copy": true, 00:08:47.486 "nvme_iov_md": false 00:08:47.486 }, 00:08:47.486 "memory_domains": [ 00:08:47.486 { 00:08:47.486 "dma_device_id": "system", 00:08:47.486 "dma_device_type": 1 00:08:47.486 }, 00:08:47.486 { 00:08:47.486 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:47.486 "dma_device_type": 2 00:08:47.486 } 00:08:47.486 ], 00:08:47.486 "driver_specific": {} 00:08:47.486 } 00:08:47.486 ] 00:08:47.486 04:57:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.486 04:57:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:47.486 
04:57:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:47.486 04:57:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:47.486 04:57:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:47.486 04:57:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:47.486 04:57:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:47.486 04:57:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:47.486 04:57:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:47.486 04:57:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:47.486 04:57:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:47.486 04:57:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:47.486 04:57:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:47.486 04:57:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.486 04:57:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:47.486 04:57:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.486 04:57:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.487 04:57:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:47.487 "name": "Existed_Raid", 00:08:47.487 "uuid": "dc2bd974-6425-42ec-ae83-7c2f133ac927", 00:08:47.487 "strip_size_kb": 0, 
00:08:47.487 "state": "configuring", 00:08:47.487 "raid_level": "raid1", 00:08:47.487 "superblock": true, 00:08:47.487 "num_base_bdevs": 3, 00:08:47.487 "num_base_bdevs_discovered": 2, 00:08:47.487 "num_base_bdevs_operational": 3, 00:08:47.487 "base_bdevs_list": [ 00:08:47.487 { 00:08:47.487 "name": "BaseBdev1", 00:08:47.487 "uuid": "0db2028d-6341-4b0b-b59e-2e9026ee1e3d", 00:08:47.487 "is_configured": true, 00:08:47.487 "data_offset": 2048, 00:08:47.487 "data_size": 63488 00:08:47.487 }, 00:08:47.487 { 00:08:47.487 "name": null, 00:08:47.487 "uuid": "08f7d2bd-6417-4f2c-956f-c4dae932a8b5", 00:08:47.487 "is_configured": false, 00:08:47.487 "data_offset": 0, 00:08:47.487 "data_size": 63488 00:08:47.487 }, 00:08:47.487 { 00:08:47.487 "name": "BaseBdev3", 00:08:47.487 "uuid": "3a490e98-91fa-4b6b-8c9e-7927241edfbf", 00:08:47.487 "is_configured": true, 00:08:47.487 "data_offset": 2048, 00:08:47.487 "data_size": 63488 00:08:47.487 } 00:08:47.487 ] 00:08:47.487 }' 00:08:47.487 04:57:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:47.487 04:57:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.745 04:57:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:47.745 04:57:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:47.745 04:57:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.745 04:57:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.745 04:57:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.745 04:57:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:08:47.745 04:57:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev 
BaseBdev3 00:08:47.745 04:57:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.745 04:57:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.745 [2024-12-14 04:57:58.610774] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:47.745 04:57:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.745 04:57:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:47.745 04:57:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:47.745 04:57:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:47.745 04:57:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:47.745 04:57:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:47.745 04:57:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:47.745 04:57:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:47.745 04:57:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:47.746 04:57:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:47.746 04:57:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:47.746 04:57:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:47.746 04:57:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.746 04:57:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.746 04:57:58 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:48.004 04:57:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.004 04:57:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:48.004 "name": "Existed_Raid", 00:08:48.004 "uuid": "dc2bd974-6425-42ec-ae83-7c2f133ac927", 00:08:48.004 "strip_size_kb": 0, 00:08:48.004 "state": "configuring", 00:08:48.004 "raid_level": "raid1", 00:08:48.004 "superblock": true, 00:08:48.004 "num_base_bdevs": 3, 00:08:48.004 "num_base_bdevs_discovered": 1, 00:08:48.004 "num_base_bdevs_operational": 3, 00:08:48.004 "base_bdevs_list": [ 00:08:48.004 { 00:08:48.004 "name": "BaseBdev1", 00:08:48.004 "uuid": "0db2028d-6341-4b0b-b59e-2e9026ee1e3d", 00:08:48.004 "is_configured": true, 00:08:48.004 "data_offset": 2048, 00:08:48.004 "data_size": 63488 00:08:48.004 }, 00:08:48.004 { 00:08:48.004 "name": null, 00:08:48.004 "uuid": "08f7d2bd-6417-4f2c-956f-c4dae932a8b5", 00:08:48.004 "is_configured": false, 00:08:48.004 "data_offset": 0, 00:08:48.004 "data_size": 63488 00:08:48.004 }, 00:08:48.004 { 00:08:48.004 "name": null, 00:08:48.004 "uuid": "3a490e98-91fa-4b6b-8c9e-7927241edfbf", 00:08:48.004 "is_configured": false, 00:08:48.004 "data_offset": 0, 00:08:48.004 "data_size": 63488 00:08:48.004 } 00:08:48.004 ] 00:08:48.004 }' 00:08:48.004 04:57:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:48.004 04:57:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.263 04:57:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:48.263 04:57:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:48.263 04:57:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:08:48.263 04:57:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.263 04:57:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.263 04:57:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:08:48.263 04:57:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:08:48.263 04:57:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.263 04:57:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.263 [2024-12-14 04:57:59.062023] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:48.263 04:57:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.263 04:57:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:48.263 04:57:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:48.263 04:57:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:48.263 04:57:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:48.263 04:57:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:48.263 04:57:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:48.263 04:57:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:48.263 04:57:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:48.263 04:57:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # 
local num_base_bdevs_discovered 00:08:48.263 04:57:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:48.263 04:57:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:48.263 04:57:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.263 04:57:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.263 04:57:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:48.263 04:57:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.263 04:57:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:48.263 "name": "Existed_Raid", 00:08:48.263 "uuid": "dc2bd974-6425-42ec-ae83-7c2f133ac927", 00:08:48.263 "strip_size_kb": 0, 00:08:48.263 "state": "configuring", 00:08:48.263 "raid_level": "raid1", 00:08:48.263 "superblock": true, 00:08:48.263 "num_base_bdevs": 3, 00:08:48.263 "num_base_bdevs_discovered": 2, 00:08:48.263 "num_base_bdevs_operational": 3, 00:08:48.263 "base_bdevs_list": [ 00:08:48.263 { 00:08:48.263 "name": "BaseBdev1", 00:08:48.263 "uuid": "0db2028d-6341-4b0b-b59e-2e9026ee1e3d", 00:08:48.263 "is_configured": true, 00:08:48.263 "data_offset": 2048, 00:08:48.263 "data_size": 63488 00:08:48.263 }, 00:08:48.263 { 00:08:48.263 "name": null, 00:08:48.263 "uuid": "08f7d2bd-6417-4f2c-956f-c4dae932a8b5", 00:08:48.263 "is_configured": false, 00:08:48.263 "data_offset": 0, 00:08:48.263 "data_size": 63488 00:08:48.263 }, 00:08:48.263 { 00:08:48.263 "name": "BaseBdev3", 00:08:48.263 "uuid": "3a490e98-91fa-4b6b-8c9e-7927241edfbf", 00:08:48.263 "is_configured": true, 00:08:48.263 "data_offset": 2048, 00:08:48.263 "data_size": 63488 00:08:48.263 } 00:08:48.263 ] 00:08:48.263 }' 00:08:48.263 04:57:59 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:48.263 04:57:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.831 04:57:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:48.831 04:57:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.831 04:57:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.831 04:57:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:48.831 04:57:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.831 04:57:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:08:48.831 04:57:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:48.831 04:57:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.831 04:57:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.831 [2024-12-14 04:57:59.517280] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:48.831 04:57:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.831 04:57:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:48.831 04:57:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:48.831 04:57:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:48.831 04:57:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:48.831 04:57:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- 
# local strip_size=0 00:08:48.831 04:57:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:48.831 04:57:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:48.831 04:57:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:48.831 04:57:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:48.831 04:57:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:48.831 04:57:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:48.831 04:57:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.831 04:57:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.831 04:57:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:48.831 04:57:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.831 04:57:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:48.831 "name": "Existed_Raid", 00:08:48.831 "uuid": "dc2bd974-6425-42ec-ae83-7c2f133ac927", 00:08:48.831 "strip_size_kb": 0, 00:08:48.831 "state": "configuring", 00:08:48.831 "raid_level": "raid1", 00:08:48.831 "superblock": true, 00:08:48.831 "num_base_bdevs": 3, 00:08:48.831 "num_base_bdevs_discovered": 1, 00:08:48.831 "num_base_bdevs_operational": 3, 00:08:48.831 "base_bdevs_list": [ 00:08:48.831 { 00:08:48.831 "name": null, 00:08:48.831 "uuid": "0db2028d-6341-4b0b-b59e-2e9026ee1e3d", 00:08:48.831 "is_configured": false, 00:08:48.831 "data_offset": 0, 00:08:48.831 "data_size": 63488 00:08:48.831 }, 00:08:48.831 { 00:08:48.831 "name": null, 00:08:48.831 "uuid": 
"08f7d2bd-6417-4f2c-956f-c4dae932a8b5", 00:08:48.831 "is_configured": false, 00:08:48.831 "data_offset": 0, 00:08:48.831 "data_size": 63488 00:08:48.831 }, 00:08:48.831 { 00:08:48.831 "name": "BaseBdev3", 00:08:48.831 "uuid": "3a490e98-91fa-4b6b-8c9e-7927241edfbf", 00:08:48.831 "is_configured": true, 00:08:48.831 "data_offset": 2048, 00:08:48.831 "data_size": 63488 00:08:48.831 } 00:08:48.831 ] 00:08:48.831 }' 00:08:48.831 04:57:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:48.831 04:57:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.399 04:57:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:49.399 04:57:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:49.399 04:57:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.399 04:57:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.399 04:58:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.399 04:58:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:08:49.399 04:58:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:08:49.399 04:58:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.399 04:58:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.399 [2024-12-14 04:58:00.034898] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:49.399 04:58:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.399 04:58:00 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:49.399 04:58:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:49.399 04:58:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:49.399 04:58:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:49.399 04:58:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:49.399 04:58:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:49.399 04:58:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:49.399 04:58:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:49.399 04:58:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:49.399 04:58:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:49.399 04:58:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:49.399 04:58:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:49.399 04:58:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.399 04:58:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.399 04:58:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.399 04:58:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:49.399 "name": "Existed_Raid", 00:08:49.399 "uuid": "dc2bd974-6425-42ec-ae83-7c2f133ac927", 00:08:49.399 "strip_size_kb": 0, 00:08:49.399 "state": "configuring", 00:08:49.399 
"raid_level": "raid1", 00:08:49.399 "superblock": true, 00:08:49.399 "num_base_bdevs": 3, 00:08:49.399 "num_base_bdevs_discovered": 2, 00:08:49.399 "num_base_bdevs_operational": 3, 00:08:49.399 "base_bdevs_list": [ 00:08:49.399 { 00:08:49.399 "name": null, 00:08:49.399 "uuid": "0db2028d-6341-4b0b-b59e-2e9026ee1e3d", 00:08:49.399 "is_configured": false, 00:08:49.399 "data_offset": 0, 00:08:49.399 "data_size": 63488 00:08:49.399 }, 00:08:49.399 { 00:08:49.399 "name": "BaseBdev2", 00:08:49.399 "uuid": "08f7d2bd-6417-4f2c-956f-c4dae932a8b5", 00:08:49.399 "is_configured": true, 00:08:49.399 "data_offset": 2048, 00:08:49.399 "data_size": 63488 00:08:49.399 }, 00:08:49.399 { 00:08:49.399 "name": "BaseBdev3", 00:08:49.399 "uuid": "3a490e98-91fa-4b6b-8c9e-7927241edfbf", 00:08:49.399 "is_configured": true, 00:08:49.399 "data_offset": 2048, 00:08:49.399 "data_size": 63488 00:08:49.399 } 00:08:49.399 ] 00:08:49.399 }' 00:08:49.399 04:58:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:49.399 04:58:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.659 04:58:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:49.659 04:58:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:49.659 04:58:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.659 04:58:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.659 04:58:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.659 04:58:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:08:49.659 04:58:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:49.659 04:58:00 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:08:49.659 04:58:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.659 04:58:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.659 04:58:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.918 04:58:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 0db2028d-6341-4b0b-b59e-2e9026ee1e3d 00:08:49.918 04:58:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.918 04:58:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.918 [2024-12-14 04:58:00.560848] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:08:49.918 [2024-12-14 04:58:00.561020] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:08:49.918 [2024-12-14 04:58:00.561033] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:49.918 NewBaseBdev 00:08:49.918 [2024-12-14 04:58:00.561309] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:08:49.918 [2024-12-14 04:58:00.561455] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:08:49.918 [2024-12-14 04:58:00.561471] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:08:49.918 [2024-12-14 04:58:00.561598] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:49.918 04:58:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.918 04:58:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:08:49.918 
04:58:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:08:49.918 04:58:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:49.918 04:58:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:49.918 04:58:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:49.918 04:58:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:49.918 04:58:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:49.918 04:58:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.918 04:58:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.918 04:58:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.918 04:58:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:08:49.918 04:58:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.918 04:58:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.918 [ 00:08:49.918 { 00:08:49.918 "name": "NewBaseBdev", 00:08:49.918 "aliases": [ 00:08:49.918 "0db2028d-6341-4b0b-b59e-2e9026ee1e3d" 00:08:49.918 ], 00:08:49.918 "product_name": "Malloc disk", 00:08:49.918 "block_size": 512, 00:08:49.918 "num_blocks": 65536, 00:08:49.918 "uuid": "0db2028d-6341-4b0b-b59e-2e9026ee1e3d", 00:08:49.918 "assigned_rate_limits": { 00:08:49.918 "rw_ios_per_sec": 0, 00:08:49.918 "rw_mbytes_per_sec": 0, 00:08:49.918 "r_mbytes_per_sec": 0, 00:08:49.918 "w_mbytes_per_sec": 0 00:08:49.918 }, 00:08:49.918 "claimed": true, 00:08:49.918 "claim_type": "exclusive_write", 00:08:49.918 
"zoned": false, 00:08:49.918 "supported_io_types": { 00:08:49.918 "read": true, 00:08:49.918 "write": true, 00:08:49.918 "unmap": true, 00:08:49.918 "flush": true, 00:08:49.918 "reset": true, 00:08:49.918 "nvme_admin": false, 00:08:49.918 "nvme_io": false, 00:08:49.918 "nvme_io_md": false, 00:08:49.918 "write_zeroes": true, 00:08:49.918 "zcopy": true, 00:08:49.918 "get_zone_info": false, 00:08:49.918 "zone_management": false, 00:08:49.918 "zone_append": false, 00:08:49.918 "compare": false, 00:08:49.918 "compare_and_write": false, 00:08:49.918 "abort": true, 00:08:49.918 "seek_hole": false, 00:08:49.918 "seek_data": false, 00:08:49.918 "copy": true, 00:08:49.918 "nvme_iov_md": false 00:08:49.918 }, 00:08:49.918 "memory_domains": [ 00:08:49.918 { 00:08:49.918 "dma_device_id": "system", 00:08:49.918 "dma_device_type": 1 00:08:49.918 }, 00:08:49.919 { 00:08:49.919 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:49.919 "dma_device_type": 2 00:08:49.919 } 00:08:49.919 ], 00:08:49.919 "driver_specific": {} 00:08:49.919 } 00:08:49.919 ] 00:08:49.919 04:58:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.919 04:58:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:49.919 04:58:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:08:49.919 04:58:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:49.919 04:58:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:49.919 04:58:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:49.919 04:58:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:49.919 04:58:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 
00:08:49.919 04:58:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:49.919 04:58:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:49.919 04:58:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:49.919 04:58:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:49.919 04:58:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:49.919 04:58:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.919 04:58:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.919 04:58:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:49.919 04:58:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.919 04:58:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:49.919 "name": "Existed_Raid", 00:08:49.919 "uuid": "dc2bd974-6425-42ec-ae83-7c2f133ac927", 00:08:49.919 "strip_size_kb": 0, 00:08:49.919 "state": "online", 00:08:49.919 "raid_level": "raid1", 00:08:49.919 "superblock": true, 00:08:49.919 "num_base_bdevs": 3, 00:08:49.919 "num_base_bdevs_discovered": 3, 00:08:49.919 "num_base_bdevs_operational": 3, 00:08:49.919 "base_bdevs_list": [ 00:08:49.919 { 00:08:49.919 "name": "NewBaseBdev", 00:08:49.919 "uuid": "0db2028d-6341-4b0b-b59e-2e9026ee1e3d", 00:08:49.919 "is_configured": true, 00:08:49.919 "data_offset": 2048, 00:08:49.919 "data_size": 63488 00:08:49.919 }, 00:08:49.919 { 00:08:49.919 "name": "BaseBdev2", 00:08:49.919 "uuid": "08f7d2bd-6417-4f2c-956f-c4dae932a8b5", 00:08:49.919 "is_configured": true, 00:08:49.919 "data_offset": 2048, 00:08:49.919 "data_size": 63488 00:08:49.919 }, 00:08:49.919 
{ 00:08:49.919 "name": "BaseBdev3", 00:08:49.919 "uuid": "3a490e98-91fa-4b6b-8c9e-7927241edfbf", 00:08:49.919 "is_configured": true, 00:08:49.919 "data_offset": 2048, 00:08:49.919 "data_size": 63488 00:08:49.919 } 00:08:49.919 ] 00:08:49.919 }' 00:08:49.919 04:58:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:49.919 04:58:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:50.486 04:58:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:08:50.486 04:58:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:50.486 04:58:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:50.486 04:58:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:50.486 04:58:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:50.486 04:58:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:50.486 04:58:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:50.486 04:58:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:50.486 04:58:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.486 04:58:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:50.486 [2024-12-14 04:58:01.072284] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:50.486 04:58:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.486 04:58:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:50.486 "name": "Existed_Raid", 00:08:50.486 
"aliases": [ 00:08:50.486 "dc2bd974-6425-42ec-ae83-7c2f133ac927" 00:08:50.486 ], 00:08:50.486 "product_name": "Raid Volume", 00:08:50.486 "block_size": 512, 00:08:50.486 "num_blocks": 63488, 00:08:50.486 "uuid": "dc2bd974-6425-42ec-ae83-7c2f133ac927", 00:08:50.486 "assigned_rate_limits": { 00:08:50.486 "rw_ios_per_sec": 0, 00:08:50.486 "rw_mbytes_per_sec": 0, 00:08:50.486 "r_mbytes_per_sec": 0, 00:08:50.486 "w_mbytes_per_sec": 0 00:08:50.486 }, 00:08:50.486 "claimed": false, 00:08:50.486 "zoned": false, 00:08:50.486 "supported_io_types": { 00:08:50.486 "read": true, 00:08:50.486 "write": true, 00:08:50.486 "unmap": false, 00:08:50.486 "flush": false, 00:08:50.486 "reset": true, 00:08:50.486 "nvme_admin": false, 00:08:50.486 "nvme_io": false, 00:08:50.486 "nvme_io_md": false, 00:08:50.486 "write_zeroes": true, 00:08:50.486 "zcopy": false, 00:08:50.486 "get_zone_info": false, 00:08:50.486 "zone_management": false, 00:08:50.486 "zone_append": false, 00:08:50.486 "compare": false, 00:08:50.486 "compare_and_write": false, 00:08:50.486 "abort": false, 00:08:50.486 "seek_hole": false, 00:08:50.486 "seek_data": false, 00:08:50.486 "copy": false, 00:08:50.486 "nvme_iov_md": false 00:08:50.486 }, 00:08:50.486 "memory_domains": [ 00:08:50.486 { 00:08:50.486 "dma_device_id": "system", 00:08:50.486 "dma_device_type": 1 00:08:50.486 }, 00:08:50.486 { 00:08:50.486 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:50.486 "dma_device_type": 2 00:08:50.486 }, 00:08:50.486 { 00:08:50.486 "dma_device_id": "system", 00:08:50.486 "dma_device_type": 1 00:08:50.486 }, 00:08:50.486 { 00:08:50.486 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:50.486 "dma_device_type": 2 00:08:50.486 }, 00:08:50.486 { 00:08:50.486 "dma_device_id": "system", 00:08:50.486 "dma_device_type": 1 00:08:50.486 }, 00:08:50.486 { 00:08:50.486 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:50.486 "dma_device_type": 2 00:08:50.486 } 00:08:50.486 ], 00:08:50.486 "driver_specific": { 00:08:50.486 "raid": { 00:08:50.486 
"uuid": "dc2bd974-6425-42ec-ae83-7c2f133ac927", 00:08:50.486 "strip_size_kb": 0, 00:08:50.486 "state": "online", 00:08:50.486 "raid_level": "raid1", 00:08:50.486 "superblock": true, 00:08:50.486 "num_base_bdevs": 3, 00:08:50.486 "num_base_bdevs_discovered": 3, 00:08:50.486 "num_base_bdevs_operational": 3, 00:08:50.486 "base_bdevs_list": [ 00:08:50.486 { 00:08:50.486 "name": "NewBaseBdev", 00:08:50.486 "uuid": "0db2028d-6341-4b0b-b59e-2e9026ee1e3d", 00:08:50.486 "is_configured": true, 00:08:50.486 "data_offset": 2048, 00:08:50.486 "data_size": 63488 00:08:50.486 }, 00:08:50.486 { 00:08:50.486 "name": "BaseBdev2", 00:08:50.486 "uuid": "08f7d2bd-6417-4f2c-956f-c4dae932a8b5", 00:08:50.486 "is_configured": true, 00:08:50.486 "data_offset": 2048, 00:08:50.486 "data_size": 63488 00:08:50.486 }, 00:08:50.486 { 00:08:50.486 "name": "BaseBdev3", 00:08:50.486 "uuid": "3a490e98-91fa-4b6b-8c9e-7927241edfbf", 00:08:50.486 "is_configured": true, 00:08:50.486 "data_offset": 2048, 00:08:50.486 "data_size": 63488 00:08:50.486 } 00:08:50.486 ] 00:08:50.486 } 00:08:50.486 } 00:08:50.486 }' 00:08:50.486 04:58:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:50.486 04:58:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:08:50.486 BaseBdev2 00:08:50.486 BaseBdev3' 00:08:50.486 04:58:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:50.486 04:58:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:50.486 04:58:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:50.486 04:58:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:08:50.486 04:58:01 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:50.486 04:58:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.486 04:58:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:50.486 04:58:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.486 04:58:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:50.486 04:58:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:50.486 04:58:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:50.486 04:58:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:50.486 04:58:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.486 04:58:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:50.486 04:58:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:50.486 04:58:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.486 04:58:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:50.486 04:58:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:50.486 04:58:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:50.486 04:58:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:50.486 04:58:01 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.486 04:58:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:50.487 04:58:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:50.487 04:58:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.487 04:58:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:50.487 04:58:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:50.487 04:58:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:50.487 04:58:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.487 04:58:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:50.487 [2024-12-14 04:58:01.323587] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:50.487 [2024-12-14 04:58:01.323660] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:50.487 [2024-12-14 04:58:01.323764] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:50.487 [2024-12-14 04:58:01.324058] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:50.487 [2024-12-14 04:58:01.324125] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline 00:08:50.487 04:58:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.487 04:58:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 79075 00:08:50.487 04:58:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # 
'[' -z 79075 ']' 00:08:50.487 04:58:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 79075 00:08:50.487 04:58:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:08:50.487 04:58:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:50.487 04:58:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79075 00:08:50.487 04:58:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:50.487 04:58:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:50.487 04:58:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79075' 00:08:50.487 killing process with pid 79075 00:08:50.487 04:58:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 79075 00:08:50.487 [2024-12-14 04:58:01.359486] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:50.487 04:58:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 79075 00:08:50.746 [2024-12-14 04:58:01.391259] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:50.746 04:58:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:08:50.746 00:08:50.746 real 0m8.673s 00:08:50.746 user 0m14.830s 00:08:50.746 sys 0m1.708s 00:08:51.005 ************************************ 00:08:51.005 END TEST raid_state_function_test_sb 00:08:51.005 ************************************ 00:08:51.005 04:58:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:51.005 04:58:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.005 04:58:01 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test 
raid_superblock_test raid1 3 00:08:51.005 04:58:01 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:08:51.005 04:58:01 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:51.005 04:58:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:51.005 ************************************ 00:08:51.005 START TEST raid_superblock_test 00:08:51.005 ************************************ 00:08:51.005 04:58:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 3 00:08:51.005 04:58:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:08:51.005 04:58:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:08:51.005 04:58:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:08:51.005 04:58:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:08:51.005 04:58:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:08:51.005 04:58:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:08:51.005 04:58:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:08:51.005 04:58:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:08:51.005 04:58:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:08:51.005 04:58:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:08:51.005 04:58:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:08:51.005 04:58:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:08:51.005 04:58:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:08:51.005 04:58:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' 
raid1 '!=' raid1 ']' 00:08:51.005 04:58:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:08:51.005 04:58:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=79674 00:08:51.005 04:58:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:08:51.005 04:58:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 79674 00:08:51.005 04:58:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 79674 ']' 00:08:51.005 04:58:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:51.005 04:58:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:51.005 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:51.005 04:58:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:51.005 04:58:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:51.005 04:58:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.005 [2024-12-14 04:58:01.785833] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:08:51.005 [2024-12-14 04:58:01.785951] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79674 ] 00:08:51.264 [2024-12-14 04:58:01.942881] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:51.264 [2024-12-14 04:58:01.988758] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:51.264 [2024-12-14 04:58:02.030539] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:51.264 [2024-12-14 04:58:02.030575] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:51.833 04:58:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:51.833 04:58:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:08:51.833 04:58:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:08:51.833 04:58:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:51.833 04:58:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:08:51.833 04:58:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:08:51.833 04:58:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:08:51.833 04:58:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:51.833 04:58:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:51.833 04:58:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:51.833 04:58:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:08:51.833 
04:58:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.833 04:58:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.833 malloc1 00:08:51.833 04:58:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.833 04:58:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:51.833 04:58:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.833 04:58:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.833 [2024-12-14 04:58:02.620751] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:51.833 [2024-12-14 04:58:02.620903] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:51.833 [2024-12-14 04:58:02.620946] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:51.833 [2024-12-14 04:58:02.620993] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:51.833 [2024-12-14 04:58:02.623069] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:51.833 [2024-12-14 04:58:02.623158] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:51.833 pt1 00:08:51.833 04:58:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.833 04:58:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:51.833 04:58:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:51.833 04:58:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:08:51.833 04:58:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:08:51.833 04:58:02 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:08:51.833 04:58:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:51.833 04:58:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:51.833 04:58:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:51.833 04:58:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:08:51.833 04:58:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.833 04:58:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.833 malloc2 00:08:51.833 04:58:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.833 04:58:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:51.833 04:58:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.833 04:58:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.833 [2024-12-14 04:58:02.659456] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:51.833 [2024-12-14 04:58:02.659578] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:51.833 [2024-12-14 04:58:02.659622] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:08:51.833 [2024-12-14 04:58:02.659675] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:51.833 [2024-12-14 04:58:02.662223] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:51.833 [2024-12-14 04:58:02.662306] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:51.833 
pt2 00:08:51.833 04:58:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.833 04:58:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:51.833 04:58:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:51.833 04:58:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:08:51.833 04:58:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:08:51.833 04:58:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:08:51.833 04:58:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:51.833 04:58:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:51.833 04:58:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:51.833 04:58:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:08:51.833 04:58:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.833 04:58:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.833 malloc3 00:08:51.833 04:58:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.833 04:58:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:08:51.833 04:58:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.833 04:58:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.833 [2024-12-14 04:58:02.687849] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:08:51.833 [2024-12-14 04:58:02.687952] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:51.833 [2024-12-14 04:58:02.687988] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:08:51.833 [2024-12-14 04:58:02.688018] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:51.833 [2024-12-14 04:58:02.690069] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:51.833 [2024-12-14 04:58:02.690155] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:08:51.833 pt3 00:08:51.833 04:58:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.833 04:58:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:51.833 04:58:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:51.833 04:58:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:08:51.833 04:58:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.833 04:58:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.833 [2024-12-14 04:58:02.699878] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:51.833 [2024-12-14 04:58:02.701727] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:51.833 [2024-12-14 04:58:02.701834] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:08:51.833 [2024-12-14 04:58:02.702024] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:08:51.833 [2024-12-14 04:58:02.702078] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:51.833 [2024-12-14 04:58:02.702363] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:08:51.833 
[2024-12-14 04:58:02.702555] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:08:51.833 [2024-12-14 04:58:02.702614] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:08:51.833 [2024-12-14 04:58:02.702807] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:51.833 04:58:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.833 04:58:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:08:51.833 04:58:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:51.833 04:58:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:51.833 04:58:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:51.833 04:58:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:51.833 04:58:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:51.833 04:58:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:51.834 04:58:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:51.834 04:58:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:51.834 04:58:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:51.834 04:58:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:51.834 04:58:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:51.834 04:58:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.834 04:58:02 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:52.093 04:58:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.093 04:58:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:52.093 "name": "raid_bdev1", 00:08:52.093 "uuid": "4d3155d7-da42-441e-9efe-ac43dd0e1e50", 00:08:52.093 "strip_size_kb": 0, 00:08:52.093 "state": "online", 00:08:52.093 "raid_level": "raid1", 00:08:52.093 "superblock": true, 00:08:52.093 "num_base_bdevs": 3, 00:08:52.093 "num_base_bdevs_discovered": 3, 00:08:52.093 "num_base_bdevs_operational": 3, 00:08:52.093 "base_bdevs_list": [ 00:08:52.093 { 00:08:52.093 "name": "pt1", 00:08:52.093 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:52.093 "is_configured": true, 00:08:52.093 "data_offset": 2048, 00:08:52.093 "data_size": 63488 00:08:52.093 }, 00:08:52.093 { 00:08:52.093 "name": "pt2", 00:08:52.093 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:52.093 "is_configured": true, 00:08:52.093 "data_offset": 2048, 00:08:52.093 "data_size": 63488 00:08:52.093 }, 00:08:52.093 { 00:08:52.093 "name": "pt3", 00:08:52.093 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:52.093 "is_configured": true, 00:08:52.093 "data_offset": 2048, 00:08:52.093 "data_size": 63488 00:08:52.093 } 00:08:52.093 ] 00:08:52.093 }' 00:08:52.093 04:58:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:52.093 04:58:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.352 04:58:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:08:52.352 04:58:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:52.352 04:58:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:52.352 04:58:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:52.352 04:58:03 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:52.352 04:58:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:52.352 04:58:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:52.352 04:58:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:52.352 04:58:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.352 04:58:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.352 [2024-12-14 04:58:03.147393] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:52.352 04:58:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.352 04:58:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:52.352 "name": "raid_bdev1", 00:08:52.352 "aliases": [ 00:08:52.352 "4d3155d7-da42-441e-9efe-ac43dd0e1e50" 00:08:52.352 ], 00:08:52.352 "product_name": "Raid Volume", 00:08:52.352 "block_size": 512, 00:08:52.352 "num_blocks": 63488, 00:08:52.352 "uuid": "4d3155d7-da42-441e-9efe-ac43dd0e1e50", 00:08:52.352 "assigned_rate_limits": { 00:08:52.352 "rw_ios_per_sec": 0, 00:08:52.352 "rw_mbytes_per_sec": 0, 00:08:52.352 "r_mbytes_per_sec": 0, 00:08:52.352 "w_mbytes_per_sec": 0 00:08:52.352 }, 00:08:52.352 "claimed": false, 00:08:52.352 "zoned": false, 00:08:52.352 "supported_io_types": { 00:08:52.352 "read": true, 00:08:52.352 "write": true, 00:08:52.352 "unmap": false, 00:08:52.352 "flush": false, 00:08:52.352 "reset": true, 00:08:52.352 "nvme_admin": false, 00:08:52.352 "nvme_io": false, 00:08:52.352 "nvme_io_md": false, 00:08:52.352 "write_zeroes": true, 00:08:52.352 "zcopy": false, 00:08:52.352 "get_zone_info": false, 00:08:52.352 "zone_management": false, 00:08:52.352 "zone_append": false, 00:08:52.352 "compare": false, 00:08:52.352 
"compare_and_write": false, 00:08:52.352 "abort": false, 00:08:52.352 "seek_hole": false, 00:08:52.352 "seek_data": false, 00:08:52.352 "copy": false, 00:08:52.352 "nvme_iov_md": false 00:08:52.352 }, 00:08:52.352 "memory_domains": [ 00:08:52.352 { 00:08:52.352 "dma_device_id": "system", 00:08:52.352 "dma_device_type": 1 00:08:52.352 }, 00:08:52.352 { 00:08:52.352 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:52.352 "dma_device_type": 2 00:08:52.352 }, 00:08:52.352 { 00:08:52.352 "dma_device_id": "system", 00:08:52.352 "dma_device_type": 1 00:08:52.352 }, 00:08:52.352 { 00:08:52.352 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:52.352 "dma_device_type": 2 00:08:52.352 }, 00:08:52.352 { 00:08:52.352 "dma_device_id": "system", 00:08:52.352 "dma_device_type": 1 00:08:52.352 }, 00:08:52.352 { 00:08:52.352 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:52.352 "dma_device_type": 2 00:08:52.352 } 00:08:52.352 ], 00:08:52.352 "driver_specific": { 00:08:52.352 "raid": { 00:08:52.352 "uuid": "4d3155d7-da42-441e-9efe-ac43dd0e1e50", 00:08:52.352 "strip_size_kb": 0, 00:08:52.352 "state": "online", 00:08:52.352 "raid_level": "raid1", 00:08:52.352 "superblock": true, 00:08:52.352 "num_base_bdevs": 3, 00:08:52.352 "num_base_bdevs_discovered": 3, 00:08:52.352 "num_base_bdevs_operational": 3, 00:08:52.352 "base_bdevs_list": [ 00:08:52.352 { 00:08:52.352 "name": "pt1", 00:08:52.352 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:52.352 "is_configured": true, 00:08:52.352 "data_offset": 2048, 00:08:52.352 "data_size": 63488 00:08:52.352 }, 00:08:52.352 { 00:08:52.352 "name": "pt2", 00:08:52.352 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:52.352 "is_configured": true, 00:08:52.352 "data_offset": 2048, 00:08:52.352 "data_size": 63488 00:08:52.352 }, 00:08:52.352 { 00:08:52.352 "name": "pt3", 00:08:52.352 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:52.352 "is_configured": true, 00:08:52.352 "data_offset": 2048, 00:08:52.352 "data_size": 63488 00:08:52.352 } 
00:08:52.352 ] 00:08:52.352 } 00:08:52.352 } 00:08:52.352 }' 00:08:52.353 04:58:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:52.353 04:58:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:52.353 pt2 00:08:52.353 pt3' 00:08:52.353 04:58:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:52.353 04:58:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:52.353 04:58:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:52.612 04:58:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:52.612 04:58:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.612 04:58:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:52.612 04:58:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.612 04:58:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.612 04:58:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:52.612 04:58:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:52.612 04:58:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:52.612 04:58:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:52.612 04:58:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.612 04:58:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.612 04:58:03 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:52.612 04:58:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.612 04:58:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:52.612 04:58:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:52.612 04:58:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:52.612 04:58:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:08:52.612 04:58:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:52.612 04:58:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.612 04:58:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.612 04:58:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.612 04:58:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:52.612 04:58:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:52.612 04:58:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:52.612 04:58:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:08:52.612 04:58:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.612 04:58:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.612 [2024-12-14 04:58:03.390895] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:52.612 04:58:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:08:52.612 04:58:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=4d3155d7-da42-441e-9efe-ac43dd0e1e50 00:08:52.612 04:58:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 4d3155d7-da42-441e-9efe-ac43dd0e1e50 ']' 00:08:52.612 04:58:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:52.612 04:58:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.612 04:58:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.612 [2024-12-14 04:58:03.442563] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:52.612 [2024-12-14 04:58:03.442586] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:52.612 [2024-12-14 04:58:03.442662] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:52.612 [2024-12-14 04:58:03.442732] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:52.612 [2024-12-14 04:58:03.442743] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:08:52.612 04:58:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.612 04:58:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:52.612 04:58:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.612 04:58:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.612 04:58:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:08:52.612 04:58:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.872 04:58:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:08:52.872 
04:58:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:08:52.872 04:58:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:52.872 04:58:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:08:52.872 04:58:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.872 04:58:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.872 04:58:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.872 04:58:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:52.872 04:58:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:08:52.872 04:58:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.872 04:58:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.872 04:58:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.872 04:58:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:52.872 04:58:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:08:52.872 04:58:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.872 04:58:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.872 04:58:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.872 04:58:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:08:52.872 04:58:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.872 04:58:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # 
set +x 00:08:52.872 04:58:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:08:52.872 04:58:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.872 04:58:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:08:52.872 04:58:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:52.872 04:58:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:08:52.872 04:58:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:52.872 04:58:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:08:52.872 04:58:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:52.872 04:58:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:08:52.872 04:58:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:52.872 04:58:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:52.872 04:58:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.872 04:58:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.872 [2024-12-14 04:58:03.598307] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:08:52.872 [2024-12-14 04:58:03.600107] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:08:52.872 [2024-12-14 04:58:03.600152] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev malloc3 is claimed 00:08:52.872 [2024-12-14 04:58:03.600212] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:08:52.872 [2024-12-14 04:58:03.600278] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:08:52.872 [2024-12-14 04:58:03.600301] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:08:52.872 [2024-12-14 04:58:03.600314] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:52.872 [2024-12-14 04:58:03.600323] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state configuring 00:08:52.872 request: 00:08:52.872 { 00:08:52.872 "name": "raid_bdev1", 00:08:52.872 "raid_level": "raid1", 00:08:52.872 "base_bdevs": [ 00:08:52.872 "malloc1", 00:08:52.872 "malloc2", 00:08:52.872 "malloc3" 00:08:52.872 ], 00:08:52.872 "superblock": false, 00:08:52.872 "method": "bdev_raid_create", 00:08:52.872 "req_id": 1 00:08:52.872 } 00:08:52.872 Got JSON-RPC error response 00:08:52.872 response: 00:08:52.872 { 00:08:52.872 "code": -17, 00:08:52.872 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:08:52.872 } 00:08:52.872 04:58:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:08:52.872 04:58:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:08:52.872 04:58:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:52.872 04:58:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:52.872 04:58:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:52.872 04:58:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:08:52.872 04:58:03 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:52.872 04:58:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.872 04:58:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.872 04:58:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.872 04:58:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:08:52.872 04:58:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:08:52.872 04:58:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:52.872 04:58:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.872 04:58:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.872 [2024-12-14 04:58:03.658197] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:52.872 [2024-12-14 04:58:03.658297] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:52.872 [2024-12-14 04:58:03.658362] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:08:52.872 [2024-12-14 04:58:03.658405] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:52.872 [2024-12-14 04:58:03.660445] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:52.872 [2024-12-14 04:58:03.660520] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:52.872 [2024-12-14 04:58:03.660641] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:52.872 [2024-12-14 04:58:03.660732] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:52.872 pt1 00:08:52.872 04:58:03 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.872 04:58:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:08:52.872 04:58:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:52.872 04:58:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:52.872 04:58:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:52.872 04:58:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:52.872 04:58:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:52.872 04:58:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:52.872 04:58:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:52.872 04:58:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:52.872 04:58:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:52.872 04:58:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:52.872 04:58:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.872 04:58:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.872 04:58:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:52.872 04:58:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.872 04:58:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:52.872 "name": "raid_bdev1", 00:08:52.872 "uuid": "4d3155d7-da42-441e-9efe-ac43dd0e1e50", 00:08:52.872 "strip_size_kb": 0, 00:08:52.872 "state": 
"configuring", 00:08:52.872 "raid_level": "raid1", 00:08:52.872 "superblock": true, 00:08:52.872 "num_base_bdevs": 3, 00:08:52.872 "num_base_bdevs_discovered": 1, 00:08:52.872 "num_base_bdevs_operational": 3, 00:08:52.872 "base_bdevs_list": [ 00:08:52.872 { 00:08:52.872 "name": "pt1", 00:08:52.872 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:52.872 "is_configured": true, 00:08:52.872 "data_offset": 2048, 00:08:52.872 "data_size": 63488 00:08:52.872 }, 00:08:52.872 { 00:08:52.872 "name": null, 00:08:52.872 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:52.872 "is_configured": false, 00:08:52.872 "data_offset": 2048, 00:08:52.872 "data_size": 63488 00:08:52.872 }, 00:08:52.872 { 00:08:52.872 "name": null, 00:08:52.872 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:52.872 "is_configured": false, 00:08:52.872 "data_offset": 2048, 00:08:52.872 "data_size": 63488 00:08:52.872 } 00:08:52.872 ] 00:08:52.872 }' 00:08:52.873 04:58:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:52.873 04:58:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.440 04:58:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:08:53.440 04:58:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:53.440 04:58:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.440 04:58:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.440 [2024-12-14 04:58:04.053516] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:53.440 [2024-12-14 04:58:04.053576] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:53.440 [2024-12-14 04:58:04.053598] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:08:53.440 
[2024-12-14 04:58:04.053611] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:53.440 [2024-12-14 04:58:04.053969] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:53.440 [2024-12-14 04:58:04.053989] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:53.440 [2024-12-14 04:58:04.054052] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:53.440 [2024-12-14 04:58:04.054075] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:53.440 pt2 00:08:53.440 04:58:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.440 04:58:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:08:53.440 04:58:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.440 04:58:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.440 [2024-12-14 04:58:04.065512] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:08:53.440 04:58:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.440 04:58:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:08:53.440 04:58:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:53.440 04:58:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:53.440 04:58:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:53.440 04:58:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:53.441 04:58:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:53.441 04:58:04 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:53.441 04:58:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:53.441 04:58:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:53.441 04:58:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:53.441 04:58:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:53.441 04:58:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:53.441 04:58:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.441 04:58:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.441 04:58:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.441 04:58:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:53.441 "name": "raid_bdev1", 00:08:53.441 "uuid": "4d3155d7-da42-441e-9efe-ac43dd0e1e50", 00:08:53.441 "strip_size_kb": 0, 00:08:53.441 "state": "configuring", 00:08:53.441 "raid_level": "raid1", 00:08:53.441 "superblock": true, 00:08:53.441 "num_base_bdevs": 3, 00:08:53.441 "num_base_bdevs_discovered": 1, 00:08:53.441 "num_base_bdevs_operational": 3, 00:08:53.441 "base_bdevs_list": [ 00:08:53.441 { 00:08:53.441 "name": "pt1", 00:08:53.441 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:53.441 "is_configured": true, 00:08:53.441 "data_offset": 2048, 00:08:53.441 "data_size": 63488 00:08:53.441 }, 00:08:53.441 { 00:08:53.441 "name": null, 00:08:53.441 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:53.441 "is_configured": false, 00:08:53.441 "data_offset": 0, 00:08:53.441 "data_size": 63488 00:08:53.441 }, 00:08:53.441 { 00:08:53.441 "name": null, 00:08:53.441 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:53.441 "is_configured": false, 00:08:53.441 
"data_offset": 2048, 00:08:53.441 "data_size": 63488 00:08:53.441 } 00:08:53.441 ] 00:08:53.441 }' 00:08:53.441 04:58:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:53.441 04:58:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.700 04:58:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:08:53.700 04:58:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:53.700 04:58:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:53.700 04:58:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.700 04:58:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.700 [2024-12-14 04:58:04.516765] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:53.700 [2024-12-14 04:58:04.516903] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:53.700 [2024-12-14 04:58:04.516942] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:08:53.700 [2024-12-14 04:58:04.516970] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:53.700 [2024-12-14 04:58:04.517424] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:53.700 [2024-12-14 04:58:04.517486] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:53.700 [2024-12-14 04:58:04.517613] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:53.700 [2024-12-14 04:58:04.517683] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:53.700 pt2 00:08:53.700 04:58:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.700 04:58:04 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:53.700 04:58:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:53.700 04:58:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:08:53.700 04:58:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.700 04:58:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.700 [2024-12-14 04:58:04.528693] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:08:53.700 [2024-12-14 04:58:04.528770] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:53.700 [2024-12-14 04:58:04.528821] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:08:53.700 [2024-12-14 04:58:04.528846] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:53.700 [2024-12-14 04:58:04.529230] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:53.700 [2024-12-14 04:58:04.529286] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:08:53.700 [2024-12-14 04:58:04.529392] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:08:53.700 [2024-12-14 04:58:04.529446] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:08:53.700 [2024-12-14 04:58:04.529603] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:08:53.700 [2024-12-14 04:58:04.529650] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:53.700 [2024-12-14 04:58:04.529879] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:53.700 [2024-12-14 04:58:04.529996] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid 
bdev generic 0x617000006980 00:08:53.700 [2024-12-14 04:58:04.530009] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:08:53.700 [2024-12-14 04:58:04.530107] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:53.700 pt3 00:08:53.701 04:58:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.701 04:58:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:53.701 04:58:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:53.701 04:58:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:08:53.701 04:58:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:53.701 04:58:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:53.701 04:58:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:53.701 04:58:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:53.701 04:58:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:53.701 04:58:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:53.701 04:58:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:53.701 04:58:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:53.701 04:58:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:53.701 04:58:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:53.701 04:58:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:53.701 04:58:04 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.701 04:58:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.701 04:58:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.960 04:58:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:53.960 "name": "raid_bdev1", 00:08:53.960 "uuid": "4d3155d7-da42-441e-9efe-ac43dd0e1e50", 00:08:53.960 "strip_size_kb": 0, 00:08:53.960 "state": "online", 00:08:53.960 "raid_level": "raid1", 00:08:53.960 "superblock": true, 00:08:53.960 "num_base_bdevs": 3, 00:08:53.960 "num_base_bdevs_discovered": 3, 00:08:53.960 "num_base_bdevs_operational": 3, 00:08:53.960 "base_bdevs_list": [ 00:08:53.960 { 00:08:53.960 "name": "pt1", 00:08:53.960 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:53.960 "is_configured": true, 00:08:53.960 "data_offset": 2048, 00:08:53.960 "data_size": 63488 00:08:53.960 }, 00:08:53.960 { 00:08:53.960 "name": "pt2", 00:08:53.960 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:53.960 "is_configured": true, 00:08:53.960 "data_offset": 2048, 00:08:53.960 "data_size": 63488 00:08:53.960 }, 00:08:53.960 { 00:08:53.960 "name": "pt3", 00:08:53.960 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:53.960 "is_configured": true, 00:08:53.960 "data_offset": 2048, 00:08:53.960 "data_size": 63488 00:08:53.960 } 00:08:53.960 ] 00:08:53.960 }' 00:08:53.960 04:58:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:53.960 04:58:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.219 04:58:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:08:54.219 04:58:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:54.219 04:58:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local 
raid_bdev_info 00:08:54.219 04:58:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:54.219 04:58:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:54.219 04:58:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:54.219 04:58:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:54.219 04:58:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:54.219 04:58:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.219 04:58:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.219 [2024-12-14 04:58:05.000208] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:54.219 04:58:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.219 04:58:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:54.219 "name": "raid_bdev1", 00:08:54.219 "aliases": [ 00:08:54.219 "4d3155d7-da42-441e-9efe-ac43dd0e1e50" 00:08:54.219 ], 00:08:54.219 "product_name": "Raid Volume", 00:08:54.219 "block_size": 512, 00:08:54.219 "num_blocks": 63488, 00:08:54.219 "uuid": "4d3155d7-da42-441e-9efe-ac43dd0e1e50", 00:08:54.219 "assigned_rate_limits": { 00:08:54.219 "rw_ios_per_sec": 0, 00:08:54.219 "rw_mbytes_per_sec": 0, 00:08:54.219 "r_mbytes_per_sec": 0, 00:08:54.219 "w_mbytes_per_sec": 0 00:08:54.219 }, 00:08:54.219 "claimed": false, 00:08:54.219 "zoned": false, 00:08:54.219 "supported_io_types": { 00:08:54.219 "read": true, 00:08:54.219 "write": true, 00:08:54.219 "unmap": false, 00:08:54.219 "flush": false, 00:08:54.219 "reset": true, 00:08:54.219 "nvme_admin": false, 00:08:54.219 "nvme_io": false, 00:08:54.219 "nvme_io_md": false, 00:08:54.219 "write_zeroes": true, 00:08:54.219 "zcopy": false, 00:08:54.219 "get_zone_info": 
false, 00:08:54.219 "zone_management": false, 00:08:54.219 "zone_append": false, 00:08:54.220 "compare": false, 00:08:54.220 "compare_and_write": false, 00:08:54.220 "abort": false, 00:08:54.220 "seek_hole": false, 00:08:54.220 "seek_data": false, 00:08:54.220 "copy": false, 00:08:54.220 "nvme_iov_md": false 00:08:54.220 }, 00:08:54.220 "memory_domains": [ 00:08:54.220 { 00:08:54.220 "dma_device_id": "system", 00:08:54.220 "dma_device_type": 1 00:08:54.220 }, 00:08:54.220 { 00:08:54.220 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:54.220 "dma_device_type": 2 00:08:54.220 }, 00:08:54.220 { 00:08:54.220 "dma_device_id": "system", 00:08:54.220 "dma_device_type": 1 00:08:54.220 }, 00:08:54.220 { 00:08:54.220 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:54.220 "dma_device_type": 2 00:08:54.220 }, 00:08:54.220 { 00:08:54.220 "dma_device_id": "system", 00:08:54.220 "dma_device_type": 1 00:08:54.220 }, 00:08:54.220 { 00:08:54.220 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:54.220 "dma_device_type": 2 00:08:54.220 } 00:08:54.220 ], 00:08:54.220 "driver_specific": { 00:08:54.220 "raid": { 00:08:54.220 "uuid": "4d3155d7-da42-441e-9efe-ac43dd0e1e50", 00:08:54.220 "strip_size_kb": 0, 00:08:54.220 "state": "online", 00:08:54.220 "raid_level": "raid1", 00:08:54.220 "superblock": true, 00:08:54.220 "num_base_bdevs": 3, 00:08:54.220 "num_base_bdevs_discovered": 3, 00:08:54.220 "num_base_bdevs_operational": 3, 00:08:54.220 "base_bdevs_list": [ 00:08:54.220 { 00:08:54.220 "name": "pt1", 00:08:54.220 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:54.220 "is_configured": true, 00:08:54.220 "data_offset": 2048, 00:08:54.220 "data_size": 63488 00:08:54.220 }, 00:08:54.220 { 00:08:54.220 "name": "pt2", 00:08:54.220 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:54.220 "is_configured": true, 00:08:54.220 "data_offset": 2048, 00:08:54.220 "data_size": 63488 00:08:54.220 }, 00:08:54.220 { 00:08:54.220 "name": "pt3", 00:08:54.220 "uuid": 
"00000000-0000-0000-0000-000000000003", 00:08:54.220 "is_configured": true, 00:08:54.220 "data_offset": 2048, 00:08:54.220 "data_size": 63488 00:08:54.220 } 00:08:54.220 ] 00:08:54.220 } 00:08:54.220 } 00:08:54.220 }' 00:08:54.220 04:58:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:54.220 04:58:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:54.220 pt2 00:08:54.220 pt3' 00:08:54.220 04:58:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:54.220 04:58:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:54.220 04:58:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:54.220 04:58:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:54.220 04:58:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:54.220 04:58:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.220 04:58:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.479 04:58:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.479 04:58:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:54.479 04:58:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:54.479 04:58:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:54.479 04:58:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:54.479 04:58:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 
-- # xtrace_disable 00:08:54.479 04:58:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:54.479 04:58:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.479 04:58:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.479 04:58:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:54.479 04:58:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:54.479 04:58:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:54.479 04:58:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:08:54.479 04:58:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:54.479 04:58:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.479 04:58:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.479 04:58:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.479 04:58:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:54.479 04:58:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:54.479 04:58:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:08:54.479 04:58:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:54.479 04:58:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.479 04:58:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.479 [2024-12-14 04:58:05.231729] 
bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:54.479 04:58:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.479 04:58:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 4d3155d7-da42-441e-9efe-ac43dd0e1e50 '!=' 4d3155d7-da42-441e-9efe-ac43dd0e1e50 ']' 00:08:54.479 04:58:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:08:54.479 04:58:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:54.479 04:58:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:54.479 04:58:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:08:54.479 04:58:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.479 04:58:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.479 [2024-12-14 04:58:05.263451] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:08:54.479 04:58:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.479 04:58:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:54.479 04:58:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:54.479 04:58:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:54.479 04:58:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:54.479 04:58:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:54.479 04:58:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:54.479 04:58:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:54.479 04:58:05 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:54.479 04:58:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:54.479 04:58:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:54.479 04:58:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:54.479 04:58:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:54.479 04:58:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.479 04:58:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.479 04:58:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.479 04:58:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:54.479 "name": "raid_bdev1", 00:08:54.479 "uuid": "4d3155d7-da42-441e-9efe-ac43dd0e1e50", 00:08:54.479 "strip_size_kb": 0, 00:08:54.479 "state": "online", 00:08:54.479 "raid_level": "raid1", 00:08:54.479 "superblock": true, 00:08:54.479 "num_base_bdevs": 3, 00:08:54.479 "num_base_bdevs_discovered": 2, 00:08:54.479 "num_base_bdevs_operational": 2, 00:08:54.479 "base_bdevs_list": [ 00:08:54.479 { 00:08:54.479 "name": null, 00:08:54.479 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:54.479 "is_configured": false, 00:08:54.479 "data_offset": 0, 00:08:54.479 "data_size": 63488 00:08:54.479 }, 00:08:54.479 { 00:08:54.479 "name": "pt2", 00:08:54.479 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:54.479 "is_configured": true, 00:08:54.479 "data_offset": 2048, 00:08:54.479 "data_size": 63488 00:08:54.479 }, 00:08:54.479 { 00:08:54.479 "name": "pt3", 00:08:54.479 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:54.479 "is_configured": true, 00:08:54.479 "data_offset": 2048, 00:08:54.479 "data_size": 63488 00:08:54.479 } 
00:08:54.479 ] 00:08:54.479 }' 00:08:54.479 04:58:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:54.479 04:58:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.053 04:58:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:55.053 04:58:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.053 04:58:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.053 [2024-12-14 04:58:05.682722] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:55.053 [2024-12-14 04:58:05.682753] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:55.053 [2024-12-14 04:58:05.682820] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:55.053 [2024-12-14 04:58:05.682880] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:55.053 [2024-12-14 04:58:05.682889] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:08:55.053 04:58:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.053 04:58:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:55.053 04:58:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:08:55.053 04:58:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.053 04:58:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.053 04:58:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.053 04:58:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:08:55.053 04:58:05 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:08:55.053 04:58:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:08:55.053 04:58:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:08:55.053 04:58:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:08:55.053 04:58:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.053 04:58:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.053 04:58:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.053 04:58:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:08:55.053 04:58:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:08:55.053 04:58:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:08:55.053 04:58:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.053 04:58:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.053 04:58:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.053 04:58:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:08:55.053 04:58:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:08:55.053 04:58:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:08:55.053 04:58:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:08:55.053 04:58:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:55.053 04:58:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.053 04:58:05 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.053 [2024-12-14 04:58:05.770573] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:55.053 [2024-12-14 04:58:05.770666] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:55.053 [2024-12-14 04:58:05.770699] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:08:55.053 [2024-12-14 04:58:05.770710] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:55.053 [2024-12-14 04:58:05.772812] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:55.053 [2024-12-14 04:58:05.772887] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:55.053 [2024-12-14 04:58:05.772976] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:55.053 [2024-12-14 04:58:05.773011] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:55.053 pt2 00:08:55.053 04:58:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.053 04:58:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:08:55.053 04:58:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:55.053 04:58:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:55.053 04:58:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:55.053 04:58:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:55.054 04:58:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:55.054 04:58:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:55.054 04:58:05 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:55.054 04:58:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:55.054 04:58:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:55.054 04:58:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:55.054 04:58:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:55.054 04:58:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.054 04:58:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.054 04:58:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.054 04:58:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:55.054 "name": "raid_bdev1", 00:08:55.054 "uuid": "4d3155d7-da42-441e-9efe-ac43dd0e1e50", 00:08:55.054 "strip_size_kb": 0, 00:08:55.054 "state": "configuring", 00:08:55.054 "raid_level": "raid1", 00:08:55.054 "superblock": true, 00:08:55.054 "num_base_bdevs": 3, 00:08:55.054 "num_base_bdevs_discovered": 1, 00:08:55.054 "num_base_bdevs_operational": 2, 00:08:55.054 "base_bdevs_list": [ 00:08:55.054 { 00:08:55.054 "name": null, 00:08:55.054 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:55.054 "is_configured": false, 00:08:55.054 "data_offset": 2048, 00:08:55.054 "data_size": 63488 00:08:55.054 }, 00:08:55.054 { 00:08:55.054 "name": "pt2", 00:08:55.054 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:55.054 "is_configured": true, 00:08:55.054 "data_offset": 2048, 00:08:55.054 "data_size": 63488 00:08:55.054 }, 00:08:55.054 { 00:08:55.054 "name": null, 00:08:55.054 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:55.054 "is_configured": false, 00:08:55.054 "data_offset": 2048, 00:08:55.054 "data_size": 63488 00:08:55.054 } 
00:08:55.054 ] 00:08:55.054 }' 00:08:55.054 04:58:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:55.054 04:58:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.335 04:58:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:08:55.335 04:58:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:08:55.335 04:58:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=2 00:08:55.335 04:58:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:08:55.335 04:58:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.335 04:58:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.335 [2024-12-14 04:58:06.137995] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:08:55.335 [2024-12-14 04:58:06.138098] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:55.335 [2024-12-14 04:58:06.138142] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:08:55.335 [2024-12-14 04:58:06.138186] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:55.335 [2024-12-14 04:58:06.138606] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:55.335 [2024-12-14 04:58:06.138664] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:08:55.335 [2024-12-14 04:58:06.138779] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:08:55.335 [2024-12-14 04:58:06.138834] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:08:55.335 [2024-12-14 04:58:06.138969] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 
00:08:55.335 [2024-12-14 04:58:06.139011] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:55.335 [2024-12-14 04:58:06.139320] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:55.335 [2024-12-14 04:58:06.139493] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:08:55.335 [2024-12-14 04:58:06.139542] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00 00:08:55.335 [2024-12-14 04:58:06.139724] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:55.335 pt3 00:08:55.335 04:58:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.335 04:58:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:55.335 04:58:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:55.335 04:58:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:55.335 04:58:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:55.335 04:58:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:55.335 04:58:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:55.335 04:58:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:55.335 04:58:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:55.335 04:58:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:55.335 04:58:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:55.335 04:58:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:08:55.335 04:58:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:55.335 04:58:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.335 04:58:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.335 04:58:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.335 04:58:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:55.335 "name": "raid_bdev1", 00:08:55.335 "uuid": "4d3155d7-da42-441e-9efe-ac43dd0e1e50", 00:08:55.335 "strip_size_kb": 0, 00:08:55.335 "state": "online", 00:08:55.335 "raid_level": "raid1", 00:08:55.335 "superblock": true, 00:08:55.335 "num_base_bdevs": 3, 00:08:55.335 "num_base_bdevs_discovered": 2, 00:08:55.335 "num_base_bdevs_operational": 2, 00:08:55.335 "base_bdevs_list": [ 00:08:55.335 { 00:08:55.335 "name": null, 00:08:55.335 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:55.335 "is_configured": false, 00:08:55.335 "data_offset": 2048, 00:08:55.335 "data_size": 63488 00:08:55.335 }, 00:08:55.335 { 00:08:55.335 "name": "pt2", 00:08:55.335 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:55.335 "is_configured": true, 00:08:55.335 "data_offset": 2048, 00:08:55.335 "data_size": 63488 00:08:55.335 }, 00:08:55.335 { 00:08:55.335 "name": "pt3", 00:08:55.335 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:55.335 "is_configured": true, 00:08:55.335 "data_offset": 2048, 00:08:55.335 "data_size": 63488 00:08:55.335 } 00:08:55.335 ] 00:08:55.335 }' 00:08:55.335 04:58:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:55.335 04:58:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.914 04:58:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:55.914 04:58:06 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.914 04:58:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.914 [2024-12-14 04:58:06.557253] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:55.914 [2024-12-14 04:58:06.557279] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:55.914 [2024-12-14 04:58:06.557347] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:55.914 [2024-12-14 04:58:06.557402] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:55.914 [2024-12-14 04:58:06.557413] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:08:55.914 04:58:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.914 04:58:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:08:55.914 04:58:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:55.914 04:58:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.914 04:58:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.914 04:58:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.914 04:58:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:08:55.914 04:58:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:08:55.914 04:58:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:08:55.914 04:58:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:08:55.914 04:58:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:08:55.914 04:58:06 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.914 04:58:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.914 04:58:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.914 04:58:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:55.914 04:58:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.914 04:58:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.914 [2024-12-14 04:58:06.613127] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:55.914 [2024-12-14 04:58:06.613246] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:55.914 [2024-12-14 04:58:06.613297] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:08:55.914 [2024-12-14 04:58:06.613336] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:55.914 [2024-12-14 04:58:06.615470] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:55.914 [2024-12-14 04:58:06.615542] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:55.914 [2024-12-14 04:58:06.615649] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:55.914 [2024-12-14 04:58:06.615719] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:55.914 [2024-12-14 04:58:06.615853] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:08:55.914 [2024-12-14 04:58:06.615921] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:55.914 [2024-12-14 04:58:06.616001] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007080 name raid_bdev1, state configuring 00:08:55.914 [2024-12-14 04:58:06.616090] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:55.914 pt1 00:08:55.914 04:58:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.914 04:58:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:08:55.914 04:58:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:08:55.914 04:58:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:55.914 04:58:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:55.914 04:58:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:55.914 04:58:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:55.914 04:58:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:55.914 04:58:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:55.914 04:58:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:55.914 04:58:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:55.914 04:58:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:55.914 04:58:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:55.914 04:58:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:55.914 04:58:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.914 04:58:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.914 04:58:06 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.914 04:58:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:55.914 "name": "raid_bdev1", 00:08:55.914 "uuid": "4d3155d7-da42-441e-9efe-ac43dd0e1e50", 00:08:55.914 "strip_size_kb": 0, 00:08:55.914 "state": "configuring", 00:08:55.914 "raid_level": "raid1", 00:08:55.914 "superblock": true, 00:08:55.914 "num_base_bdevs": 3, 00:08:55.914 "num_base_bdevs_discovered": 1, 00:08:55.914 "num_base_bdevs_operational": 2, 00:08:55.914 "base_bdevs_list": [ 00:08:55.914 { 00:08:55.914 "name": null, 00:08:55.914 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:55.914 "is_configured": false, 00:08:55.914 "data_offset": 2048, 00:08:55.914 "data_size": 63488 00:08:55.914 }, 00:08:55.914 { 00:08:55.914 "name": "pt2", 00:08:55.914 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:55.914 "is_configured": true, 00:08:55.914 "data_offset": 2048, 00:08:55.914 "data_size": 63488 00:08:55.914 }, 00:08:55.914 { 00:08:55.914 "name": null, 00:08:55.914 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:55.914 "is_configured": false, 00:08:55.914 "data_offset": 2048, 00:08:55.914 "data_size": 63488 00:08:55.914 } 00:08:55.914 ] 00:08:55.914 }' 00:08:55.914 04:58:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:55.914 04:58:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.483 04:58:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:08:56.483 04:58:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.483 04:58:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.483 04:58:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:08:56.483 04:58:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:08:56.483 04:58:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:08:56.483 04:58:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:08:56.483 04:58:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.483 04:58:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.483 [2024-12-14 04:58:07.116279] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:08:56.483 [2024-12-14 04:58:07.116344] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:56.483 [2024-12-14 04:58:07.116366] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:08:56.483 [2024-12-14 04:58:07.116377] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:56.483 [2024-12-14 04:58:07.116770] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:56.483 [2024-12-14 04:58:07.116793] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:08:56.483 [2024-12-14 04:58:07.116868] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:08:56.483 [2024-12-14 04:58:07.116914] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:08:56.483 [2024-12-14 04:58:07.117016] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:08:56.483 [2024-12-14 04:58:07.117026] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:56.483 [2024-12-14 04:58:07.117246] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:08:56.483 [2024-12-14 04:58:07.117373] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:08:56.483 [2024-12-14 04:58:07.117382] 
bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:08:56.483 [2024-12-14 04:58:07.117485] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:56.483 pt3 00:08:56.483 04:58:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.483 04:58:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:56.483 04:58:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:56.483 04:58:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:56.483 04:58:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:56.483 04:58:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:56.483 04:58:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:56.483 04:58:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:56.483 04:58:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:56.483 04:58:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:56.483 04:58:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:56.483 04:58:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:56.483 04:58:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:56.483 04:58:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.483 04:58:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.483 04:58:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:08:56.483 04:58:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:56.483 "name": "raid_bdev1", 00:08:56.483 "uuid": "4d3155d7-da42-441e-9efe-ac43dd0e1e50", 00:08:56.483 "strip_size_kb": 0, 00:08:56.483 "state": "online", 00:08:56.483 "raid_level": "raid1", 00:08:56.483 "superblock": true, 00:08:56.483 "num_base_bdevs": 3, 00:08:56.483 "num_base_bdevs_discovered": 2, 00:08:56.483 "num_base_bdevs_operational": 2, 00:08:56.483 "base_bdevs_list": [ 00:08:56.483 { 00:08:56.483 "name": null, 00:08:56.483 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:56.483 "is_configured": false, 00:08:56.483 "data_offset": 2048, 00:08:56.483 "data_size": 63488 00:08:56.483 }, 00:08:56.483 { 00:08:56.483 "name": "pt2", 00:08:56.483 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:56.483 "is_configured": true, 00:08:56.483 "data_offset": 2048, 00:08:56.483 "data_size": 63488 00:08:56.483 }, 00:08:56.483 { 00:08:56.483 "name": "pt3", 00:08:56.483 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:56.483 "is_configured": true, 00:08:56.483 "data_offset": 2048, 00:08:56.483 "data_size": 63488 00:08:56.483 } 00:08:56.483 ] 00:08:56.483 }' 00:08:56.483 04:58:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:56.483 04:58:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.743 04:58:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:08:56.743 04:58:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.743 04:58:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.743 04:58:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:08:56.743 04:58:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.743 04:58:07 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:08:56.743 04:58:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:56.743 04:58:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.743 04:58:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.743 04:58:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:08:56.743 [2024-12-14 04:58:07.595712] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:56.743 04:58:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.002 04:58:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 4d3155d7-da42-441e-9efe-ac43dd0e1e50 '!=' 4d3155d7-da42-441e-9efe-ac43dd0e1e50 ']' 00:08:57.002 04:58:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 79674 00:08:57.002 04:58:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 79674 ']' 00:08:57.002 04:58:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 79674 00:08:57.002 04:58:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:08:57.002 04:58:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:57.002 04:58:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79674 00:08:57.002 04:58:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:57.002 04:58:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:57.002 04:58:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79674' 00:08:57.002 killing process with pid 79674 00:08:57.002 04:58:07 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@969 -- # kill 79674 00:08:57.002 [2024-12-14 04:58:07.681725] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:57.002 [2024-12-14 04:58:07.681858] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:57.002 04:58:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 79674 00:08:57.002 [2024-12-14 04:58:07.681961] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:57.002 [2024-12-14 04:58:07.681974] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:08:57.003 [2024-12-14 04:58:07.715992] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:57.263 04:58:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:08:57.263 00:08:57.263 real 0m6.261s 00:08:57.263 user 0m10.464s 00:08:57.263 sys 0m1.277s 00:08:57.263 04:58:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:57.263 04:58:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.263 ************************************ 00:08:57.263 END TEST raid_superblock_test 00:08:57.263 ************************************ 00:08:57.263 04:58:08 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 3 read 00:08:57.263 04:58:08 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:57.263 04:58:08 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:57.263 04:58:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:57.263 ************************************ 00:08:57.263 START TEST raid_read_error_test 00:08:57.263 ************************************ 00:08:57.263 04:58:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 3 read 00:08:57.263 04:58:08 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:08:57.263 04:58:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:08:57.263 04:58:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:08:57.263 04:58:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:57.263 04:58:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:57.263 04:58:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:57.263 04:58:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:57.263 04:58:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:57.263 04:58:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:57.263 04:58:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:57.263 04:58:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:57.263 04:58:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:08:57.263 04:58:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:57.263 04:58:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:57.263 04:58:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:57.263 04:58:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:57.263 04:58:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:57.263 04:58:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:57.263 04:58:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:57.263 04:58:08 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:57.263 04:58:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:57.263 04:58:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:08:57.263 04:58:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:08:57.263 04:58:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:57.263 04:58:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.orRzIYe9FZ 00:08:57.263 04:58:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=80109 00:08:57.263 04:58:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:57.263 04:58:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 80109 00:08:57.263 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:57.263 04:58:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 80109 ']' 00:08:57.263 04:58:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:57.263 04:58:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:57.263 04:58:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:57.263 04:58:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:57.263 04:58:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.263 [2024-12-14 04:58:08.124821] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:08:57.263 [2024-12-14 04:58:08.124956] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80109 ] 00:08:57.523 [2024-12-14 04:58:08.286241] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:57.523 [2024-12-14 04:58:08.332048] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:57.523 [2024-12-14 04:58:08.374090] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:57.523 [2024-12-14 04:58:08.374122] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:58.092 04:58:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:58.092 04:58:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:08:58.092 04:58:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:58.092 04:58:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:58.092 04:58:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.092 04:58:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.092 BaseBdev1_malloc 00:08:58.092 04:58:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.092 04:58:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:58.092 04:58:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.092 04:58:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.351 true 00:08:58.351 04:58:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:08:58.351 04:58:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:58.351 04:58:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.351 04:58:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.351 [2024-12-14 04:58:08.976400] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:58.351 [2024-12-14 04:58:08.976454] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:58.351 [2024-12-14 04:58:08.976480] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:58.351 [2024-12-14 04:58:08.976491] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:58.351 [2024-12-14 04:58:08.978597] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:58.351 [2024-12-14 04:58:08.978633] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:58.351 BaseBdev1 00:08:58.351 04:58:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.351 04:58:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:58.351 04:58:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:58.351 04:58:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.351 04:58:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.351 BaseBdev2_malloc 00:08:58.351 04:58:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.351 04:58:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:58.351 04:58:09 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.352 04:58:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.352 true 00:08:58.352 04:58:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.352 04:58:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:58.352 04:58:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.352 04:58:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.352 [2024-12-14 04:58:09.027038] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:58.352 [2024-12-14 04:58:09.027086] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:58.352 [2024-12-14 04:58:09.027105] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:58.352 [2024-12-14 04:58:09.027114] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:58.352 [2024-12-14 04:58:09.029141] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:58.352 [2024-12-14 04:58:09.029185] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:58.352 BaseBdev2 00:08:58.352 04:58:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.352 04:58:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:58.352 04:58:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:08:58.352 04:58:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.352 04:58:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.352 BaseBdev3_malloc 00:08:58.352 04:58:09 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.352 04:58:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:08:58.352 04:58:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.352 04:58:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.352 true 00:08:58.352 04:58:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.352 04:58:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:08:58.352 04:58:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.352 04:58:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.352 [2024-12-14 04:58:09.067704] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:08:58.352 [2024-12-14 04:58:09.067750] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:58.352 [2024-12-14 04:58:09.067768] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:08:58.352 [2024-12-14 04:58:09.067777] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:58.352 [2024-12-14 04:58:09.069748] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:58.352 [2024-12-14 04:58:09.069785] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:08:58.352 BaseBdev3 00:08:58.352 04:58:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.352 04:58:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:08:58.352 04:58:09 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.352 04:58:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.352 [2024-12-14 04:58:09.079744] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:58.352 [2024-12-14 04:58:09.081510] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:58.352 [2024-12-14 04:58:09.081595] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:58.352 [2024-12-14 04:58:09.081755] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:08:58.352 [2024-12-14 04:58:09.081769] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:58.352 [2024-12-14 04:58:09.082033] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:08:58.352 [2024-12-14 04:58:09.082224] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:08:58.352 [2024-12-14 04:58:09.082250] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00 00:08:58.352 [2024-12-14 04:58:09.082391] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:58.352 04:58:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.352 04:58:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:08:58.352 04:58:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:58.352 04:58:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:58.352 04:58:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:58.352 04:58:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:58.352 04:58:09 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:58.352 04:58:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:58.352 04:58:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:58.352 04:58:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:58.352 04:58:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:58.352 04:58:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:58.352 04:58:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:58.352 04:58:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.352 04:58:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.352 04:58:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.352 04:58:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:58.352 "name": "raid_bdev1", 00:08:58.352 "uuid": "daaf18c4-e50f-43fe-9f50-eaa36d129541", 00:08:58.352 "strip_size_kb": 0, 00:08:58.352 "state": "online", 00:08:58.352 "raid_level": "raid1", 00:08:58.352 "superblock": true, 00:08:58.352 "num_base_bdevs": 3, 00:08:58.352 "num_base_bdevs_discovered": 3, 00:08:58.352 "num_base_bdevs_operational": 3, 00:08:58.352 "base_bdevs_list": [ 00:08:58.352 { 00:08:58.352 "name": "BaseBdev1", 00:08:58.352 "uuid": "9772a6e6-4ab0-53d0-ae15-3e868c341644", 00:08:58.352 "is_configured": true, 00:08:58.352 "data_offset": 2048, 00:08:58.352 "data_size": 63488 00:08:58.352 }, 00:08:58.352 { 00:08:58.352 "name": "BaseBdev2", 00:08:58.352 "uuid": "f9d82faf-19ac-5432-a4b4-aeec109a5b49", 00:08:58.352 "is_configured": true, 00:08:58.352 "data_offset": 2048, 00:08:58.352 "data_size": 63488 
00:08:58.352 }, 00:08:58.352 { 00:08:58.352 "name": "BaseBdev3", 00:08:58.352 "uuid": "c335c70c-eed8-5233-986d-f4f23df346a0", 00:08:58.352 "is_configured": true, 00:08:58.352 "data_offset": 2048, 00:08:58.352 "data_size": 63488 00:08:58.352 } 00:08:58.352 ] 00:08:58.352 }' 00:08:58.352 04:58:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:58.352 04:58:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.921 04:58:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:58.921 04:58:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:58.921 [2024-12-14 04:58:09.635357] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:59.862 04:58:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:08:59.862 04:58:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.862 04:58:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.862 04:58:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.862 04:58:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:59.862 04:58:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:08:59.862 04:58:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:08:59.862 04:58:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:08:59.862 04:58:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:08:59.862 04:58:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:59.862 
04:58:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:59.862 04:58:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:59.862 04:58:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:59.863 04:58:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:59.863 04:58:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:59.863 04:58:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:59.863 04:58:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:59.863 04:58:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:59.863 04:58:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:59.863 04:58:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:59.863 04:58:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.863 04:58:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.863 04:58:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.863 04:58:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:59.863 "name": "raid_bdev1", 00:08:59.863 "uuid": "daaf18c4-e50f-43fe-9f50-eaa36d129541", 00:08:59.863 "strip_size_kb": 0, 00:08:59.863 "state": "online", 00:08:59.863 "raid_level": "raid1", 00:08:59.863 "superblock": true, 00:08:59.863 "num_base_bdevs": 3, 00:08:59.863 "num_base_bdevs_discovered": 3, 00:08:59.863 "num_base_bdevs_operational": 3, 00:08:59.863 "base_bdevs_list": [ 00:08:59.863 { 00:08:59.863 "name": "BaseBdev1", 00:08:59.863 "uuid": "9772a6e6-4ab0-53d0-ae15-3e868c341644", 
00:08:59.863 "is_configured": true, 00:08:59.863 "data_offset": 2048, 00:08:59.863 "data_size": 63488 00:08:59.863 }, 00:08:59.863 { 00:08:59.863 "name": "BaseBdev2", 00:08:59.863 "uuid": "f9d82faf-19ac-5432-a4b4-aeec109a5b49", 00:08:59.863 "is_configured": true, 00:08:59.863 "data_offset": 2048, 00:08:59.863 "data_size": 63488 00:08:59.863 }, 00:08:59.863 { 00:08:59.863 "name": "BaseBdev3", 00:08:59.863 "uuid": "c335c70c-eed8-5233-986d-f4f23df346a0", 00:08:59.863 "is_configured": true, 00:08:59.863 "data_offset": 2048, 00:08:59.863 "data_size": 63488 00:08:59.863 } 00:08:59.863 ] 00:08:59.863 }' 00:08:59.863 04:58:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:59.863 04:58:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.123 04:58:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:00.123 04:58:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.123 04:58:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.123 [2024-12-14 04:58:10.985729] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:00.123 [2024-12-14 04:58:10.985766] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:00.123 [2024-12-14 04:58:10.988130] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:00.123 [2024-12-14 04:58:10.988198] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:00.123 [2024-12-14 04:58:10.988308] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:00.123 [2024-12-14 04:58:10.988335] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:09:00.123 { 00:09:00.123 "results": [ 00:09:00.123 { 00:09:00.123 "job": "raid_bdev1", 
00:09:00.123 "core_mask": "0x1", 00:09:00.123 "workload": "randrw", 00:09:00.123 "percentage": 50, 00:09:00.123 "status": "finished", 00:09:00.123 "queue_depth": 1, 00:09:00.123 "io_size": 131072, 00:09:00.123 "runtime": 1.351176, 00:09:00.123 "iops": 15057.253829256884, 00:09:00.123 "mibps": 1882.1567286571105, 00:09:00.123 "io_failed": 0, 00:09:00.123 "io_timeout": 0, 00:09:00.123 "avg_latency_us": 63.982284972864385, 00:09:00.123 "min_latency_us": 21.575545851528386, 00:09:00.123 "max_latency_us": 1423.7624454148472 00:09:00.123 } 00:09:00.123 ], 00:09:00.123 "core_count": 1 00:09:00.123 } 00:09:00.123 04:58:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.123 04:58:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 80109 00:09:00.123 04:58:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 80109 ']' 00:09:00.123 04:58:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 80109 00:09:00.123 04:58:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:09:00.123 04:58:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:00.123 04:58:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80109 00:09:00.383 killing process with pid 80109 00:09:00.383 04:58:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:00.383 04:58:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:00.383 04:58:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 80109' 00:09:00.383 04:58:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 80109 00:09:00.383 [2024-12-14 04:58:11.025616] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:00.383 04:58:11 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 80109 00:09:00.383 [2024-12-14 04:58:11.051725] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:00.642 04:58:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.orRzIYe9FZ 00:09:00.642 04:58:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:00.642 04:58:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:00.642 04:58:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:09:00.642 04:58:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:09:00.642 04:58:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:00.642 04:58:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:00.643 04:58:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:09:00.643 00:09:00.643 real 0m3.270s 00:09:00.643 user 0m4.142s 00:09:00.643 sys 0m0.529s 00:09:00.643 04:58:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:00.643 04:58:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.643 ************************************ 00:09:00.643 END TEST raid_read_error_test 00:09:00.643 ************************************ 00:09:00.643 04:58:11 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 3 write 00:09:00.643 04:58:11 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:00.643 04:58:11 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:00.643 04:58:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:00.643 ************************************ 00:09:00.643 START TEST raid_write_error_test 00:09:00.643 ************************************ 00:09:00.643 04:58:11 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 3 write 00:09:00.643 04:58:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:09:00.643 04:58:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:09:00.643 04:58:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:09:00.643 04:58:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:00.643 04:58:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:00.643 04:58:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:00.643 04:58:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:00.643 04:58:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:00.643 04:58:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:00.643 04:58:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:00.643 04:58:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:00.643 04:58:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:00.643 04:58:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:00.643 04:58:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:00.643 04:58:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:00.643 04:58:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:00.643 04:58:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:00.643 04:58:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 
00:09:00.643 04:58:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:00.643 04:58:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:00.643 04:58:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:00.643 04:58:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:09:00.643 04:58:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:09:00.643 04:58:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:00.643 04:58:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.4glHKWo1Ew 00:09:00.643 04:58:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=80238 00:09:00.643 04:58:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:00.643 04:58:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 80238 00:09:00.643 04:58:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 80238 ']' 00:09:00.643 04:58:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:00.643 04:58:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:00.643 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:00.643 04:58:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:09:00.643 04:58:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:00.643 04:58:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.643 [2024-12-14 04:58:11.468691] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:09:00.643 [2024-12-14 04:58:11.468828] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80238 ] 00:09:00.903 [2024-12-14 04:58:11.629111] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:00.903 [2024-12-14 04:58:11.674414] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:00.903 [2024-12-14 04:58:11.716164] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:00.903 [2024-12-14 04:58:11.716212] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:01.472 04:58:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:01.472 04:58:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:09:01.472 04:58:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:01.472 04:58:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:01.472 04:58:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.472 04:58:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.472 BaseBdev1_malloc 00:09:01.472 04:58:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.472 04:58:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:09:01.472 04:58:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.472 04:58:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.472 true 00:09:01.472 04:58:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.472 04:58:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:01.472 04:58:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.472 04:58:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.472 [2024-12-14 04:58:12.330110] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:01.472 [2024-12-14 04:58:12.330182] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:01.472 [2024-12-14 04:58:12.330203] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:01.472 [2024-12-14 04:58:12.330211] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:01.472 [2024-12-14 04:58:12.332273] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:01.472 [2024-12-14 04:58:12.332306] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:01.472 BaseBdev1 00:09:01.472 04:58:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.472 04:58:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:01.472 04:58:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:01.472 04:58:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.472 04:58:12 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:01.733 BaseBdev2_malloc 00:09:01.733 04:58:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.733 04:58:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:01.733 04:58:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.733 04:58:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.733 true 00:09:01.733 04:58:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.733 04:58:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:01.733 04:58:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.733 04:58:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.733 [2024-12-14 04:58:12.388028] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:01.733 [2024-12-14 04:58:12.388093] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:01.733 [2024-12-14 04:58:12.388121] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:01.733 [2024-12-14 04:58:12.388135] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:01.733 [2024-12-14 04:58:12.391347] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:01.733 [2024-12-14 04:58:12.391393] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:01.733 BaseBdev2 00:09:01.733 04:58:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.733 04:58:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:01.733 04:58:12 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:01.733 04:58:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.733 04:58:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.733 BaseBdev3_malloc 00:09:01.733 04:58:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.733 04:58:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:01.733 04:58:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.733 04:58:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.733 true 00:09:01.733 04:58:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.733 04:58:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:01.733 04:58:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.733 04:58:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.733 [2024-12-14 04:58:12.428717] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:01.733 [2024-12-14 04:58:12.428758] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:01.733 [2024-12-14 04:58:12.428792] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:01.733 [2024-12-14 04:58:12.428800] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:01.733 [2024-12-14 04:58:12.430748] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:01.733 [2024-12-14 04:58:12.430781] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:09:01.733 BaseBdev3 00:09:01.733 04:58:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.733 04:58:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:01.733 04:58:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.733 04:58:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.733 [2024-12-14 04:58:12.440751] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:01.733 [2024-12-14 04:58:12.442498] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:01.733 [2024-12-14 04:58:12.442595] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:01.733 [2024-12-14 04:58:12.442766] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:09:01.733 [2024-12-14 04:58:12.442781] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:01.733 [2024-12-14 04:58:12.443028] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:09:01.733 [2024-12-14 04:58:12.443194] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:09:01.733 [2024-12-14 04:58:12.443231] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00 00:09:01.733 [2024-12-14 04:58:12.443365] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:01.733 04:58:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.733 04:58:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:09:01.733 04:58:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # 
local raid_bdev_name=raid_bdev1 00:09:01.733 04:58:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:01.733 04:58:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:01.733 04:58:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:01.733 04:58:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:01.733 04:58:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:01.733 04:58:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:01.733 04:58:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:01.733 04:58:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:01.733 04:58:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:01.733 04:58:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.733 04:58:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.733 04:58:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:01.733 04:58:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.733 04:58:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:01.733 "name": "raid_bdev1", 00:09:01.733 "uuid": "d4ca1161-942b-4ac5-a02b-fce0d6617e11", 00:09:01.733 "strip_size_kb": 0, 00:09:01.733 "state": "online", 00:09:01.733 "raid_level": "raid1", 00:09:01.733 "superblock": true, 00:09:01.733 "num_base_bdevs": 3, 00:09:01.733 "num_base_bdevs_discovered": 3, 00:09:01.733 "num_base_bdevs_operational": 3, 00:09:01.733 "base_bdevs_list": [ 00:09:01.733 { 00:09:01.733 "name": "BaseBdev1", 00:09:01.733 
"uuid": "1a0b2862-47ed-5a68-a59c-32d0048a4b66", 00:09:01.733 "is_configured": true, 00:09:01.733 "data_offset": 2048, 00:09:01.733 "data_size": 63488 00:09:01.733 }, 00:09:01.733 { 00:09:01.733 "name": "BaseBdev2", 00:09:01.733 "uuid": "5be7079f-12d5-505f-8eaa-722c53843825", 00:09:01.733 "is_configured": true, 00:09:01.733 "data_offset": 2048, 00:09:01.733 "data_size": 63488 00:09:01.733 }, 00:09:01.733 { 00:09:01.733 "name": "BaseBdev3", 00:09:01.733 "uuid": "c5c3c375-1498-50e4-9f39-17f791c1c49d", 00:09:01.733 "is_configured": true, 00:09:01.733 "data_offset": 2048, 00:09:01.734 "data_size": 63488 00:09:01.734 } 00:09:01.734 ] 00:09:01.734 }' 00:09:01.734 04:58:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:01.734 04:58:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.305 04:58:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:02.305 04:58:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:02.305 [2024-12-14 04:58:12.984207] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:03.245 04:58:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:09:03.245 04:58:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.245 04:58:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.245 [2024-12-14 04:58:13.902832] bdev_raid.c:2272:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:09:03.245 [2024-12-14 04:58:13.902886] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:03.245 [2024-12-14 04:58:13.903088] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005e10 
00:09:03.245 04:58:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.245 04:58:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:03.245 04:58:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:09:03.245 04:58:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:09:03.245 04:58:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:09:03.245 04:58:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:03.245 04:58:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:03.245 04:58:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:03.245 04:58:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:03.245 04:58:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:03.245 04:58:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:03.245 04:58:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:03.245 04:58:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:03.245 04:58:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:03.245 04:58:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:03.245 04:58:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:03.245 04:58:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:03.245 04:58:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:09:03.245 04:58:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.245 04:58:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.245 04:58:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:03.245 "name": "raid_bdev1", 00:09:03.245 "uuid": "d4ca1161-942b-4ac5-a02b-fce0d6617e11", 00:09:03.245 "strip_size_kb": 0, 00:09:03.245 "state": "online", 00:09:03.245 "raid_level": "raid1", 00:09:03.245 "superblock": true, 00:09:03.245 "num_base_bdevs": 3, 00:09:03.245 "num_base_bdevs_discovered": 2, 00:09:03.245 "num_base_bdevs_operational": 2, 00:09:03.245 "base_bdevs_list": [ 00:09:03.245 { 00:09:03.245 "name": null, 00:09:03.245 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:03.245 "is_configured": false, 00:09:03.245 "data_offset": 0, 00:09:03.245 "data_size": 63488 00:09:03.245 }, 00:09:03.245 { 00:09:03.245 "name": "BaseBdev2", 00:09:03.245 "uuid": "5be7079f-12d5-505f-8eaa-722c53843825", 00:09:03.245 "is_configured": true, 00:09:03.245 "data_offset": 2048, 00:09:03.245 "data_size": 63488 00:09:03.245 }, 00:09:03.245 { 00:09:03.245 "name": "BaseBdev3", 00:09:03.245 "uuid": "c5c3c375-1498-50e4-9f39-17f791c1c49d", 00:09:03.245 "is_configured": true, 00:09:03.245 "data_offset": 2048, 00:09:03.245 "data_size": 63488 00:09:03.245 } 00:09:03.245 ] 00:09:03.245 }' 00:09:03.245 04:58:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:03.245 04:58:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.504 04:58:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:03.504 04:58:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.504 04:58:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.504 [2024-12-14 04:58:14.361106] 
bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:03.504 [2024-12-14 04:58:14.361141] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:03.504 [2024-12-14 04:58:14.363584] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:03.504 [2024-12-14 04:58:14.363641] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:03.504 [2024-12-14 04:58:14.363726] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:03.504 [2024-12-14 04:58:14.363735] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:09:03.504 { 00:09:03.504 "results": [ 00:09:03.504 { 00:09:03.504 "job": "raid_bdev1", 00:09:03.504 "core_mask": "0x1", 00:09:03.504 "workload": "randrw", 00:09:03.504 "percentage": 50, 00:09:03.504 "status": "finished", 00:09:03.504 "queue_depth": 1, 00:09:03.504 "io_size": 131072, 00:09:03.504 "runtime": 1.377832, 00:09:03.504 "iops": 16584.750535624084, 00:09:03.504 "mibps": 2073.0938169530104, 00:09:03.504 "io_failed": 0, 00:09:03.504 "io_timeout": 0, 00:09:03.504 "avg_latency_us": 57.842465686670764, 00:09:03.504 "min_latency_us": 21.687336244541484, 00:09:03.504 "max_latency_us": 1352.216593886463 00:09:03.504 } 00:09:03.504 ], 00:09:03.504 "core_count": 1 00:09:03.504 } 00:09:03.504 04:58:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.504 04:58:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 80238 00:09:03.504 04:58:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 80238 ']' 00:09:03.504 04:58:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 80238 00:09:03.504 04:58:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:09:03.504 04:58:14 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:03.504 04:58:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80238 00:09:03.765 04:58:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:03.765 killing process with pid 80238 00:09:03.765 04:58:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:03.765 04:58:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 80238' 00:09:03.765 04:58:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 80238 00:09:03.765 04:58:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 80238 00:09:03.765 [2024-12-14 04:58:14.409541] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:03.765 [2024-12-14 04:58:14.435162] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:04.025 04:58:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.4glHKWo1Ew 00:09:04.025 04:58:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:04.025 04:58:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:04.025 04:58:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:09:04.025 04:58:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:09:04.025 04:58:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:04.025 04:58:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:04.025 04:58:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:09:04.025 00:09:04.025 real 0m3.307s 00:09:04.025 user 0m4.211s 00:09:04.025 sys 0m0.512s 00:09:04.025 04:58:14 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:04.025 04:58:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.025 ************************************ 00:09:04.025 END TEST raid_write_error_test 00:09:04.025 ************************************ 00:09:04.025 04:58:14 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:09:04.025 04:58:14 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:09:04.025 04:58:14 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:09:04.025 04:58:14 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:04.025 04:58:14 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:04.025 04:58:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:04.025 ************************************ 00:09:04.025 START TEST raid_state_function_test 00:09:04.025 ************************************ 00:09:04.025 04:58:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 4 false 00:09:04.025 04:58:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:09:04.025 04:58:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:09:04.025 04:58:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:09:04.025 04:58:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:04.025 04:58:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:04.025 04:58:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:04.025 04:58:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:04.025 04:58:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- 
# (( i++ )) 00:09:04.025 04:58:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:04.025 04:58:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:04.025 04:58:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:04.025 04:58:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:04.026 04:58:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:04.026 04:58:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:04.026 04:58:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:04.026 04:58:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:09:04.026 04:58:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:04.026 04:58:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:04.026 04:58:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:09:04.026 04:58:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:04.026 04:58:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:04.026 04:58:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:04.026 04:58:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:04.026 04:58:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:04.026 04:58:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:09:04.026 04:58:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:04.026 
04:58:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:04.026 04:58:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:09:04.026 04:58:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:09:04.026 04:58:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=80365 00:09:04.026 04:58:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:04.026 Process raid pid: 80365 00:09:04.026 04:58:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 80365' 00:09:04.026 04:58:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 80365 00:09:04.026 04:58:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 80365 ']' 00:09:04.026 04:58:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:04.026 04:58:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:04.026 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:04.026 04:58:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:04.026 04:58:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:04.026 04:58:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.026 [2024-12-14 04:58:14.842814] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:09:04.026 [2024-12-14 04:58:14.842950] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:04.286 [2024-12-14 04:58:15.002962] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:04.286 [2024-12-14 04:58:15.048265] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:04.286 [2024-12-14 04:58:15.089927] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:04.286 [2024-12-14 04:58:15.089968] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:04.854 04:58:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:04.854 04:58:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:09:04.854 04:58:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:04.854 04:58:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.854 04:58:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.854 [2024-12-14 04:58:15.667244] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:04.854 [2024-12-14 04:58:15.667290] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:04.854 [2024-12-14 04:58:15.667301] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:04.854 [2024-12-14 04:58:15.667311] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:04.854 [2024-12-14 04:58:15.667317] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:09:04.854 [2024-12-14 04:58:15.667329] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:04.854 [2024-12-14 04:58:15.667335] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:04.854 [2024-12-14 04:58:15.667343] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:04.854 04:58:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.854 04:58:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:04.854 04:58:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:04.854 04:58:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:04.854 04:58:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:04.854 04:58:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:04.854 04:58:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:04.854 04:58:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:04.854 04:58:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:04.854 04:58:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:04.854 04:58:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:04.854 04:58:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:04.854 04:58:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:04.854 04:58:15 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.854 04:58:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.854 04:58:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.854 04:58:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:04.854 "name": "Existed_Raid", 00:09:04.854 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:04.854 "strip_size_kb": 64, 00:09:04.854 "state": "configuring", 00:09:04.854 "raid_level": "raid0", 00:09:04.854 "superblock": false, 00:09:04.854 "num_base_bdevs": 4, 00:09:04.854 "num_base_bdevs_discovered": 0, 00:09:04.854 "num_base_bdevs_operational": 4, 00:09:04.854 "base_bdevs_list": [ 00:09:04.854 { 00:09:04.854 "name": "BaseBdev1", 00:09:04.854 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:04.854 "is_configured": false, 00:09:04.854 "data_offset": 0, 00:09:04.854 "data_size": 0 00:09:04.854 }, 00:09:04.854 { 00:09:04.854 "name": "BaseBdev2", 00:09:04.854 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:04.854 "is_configured": false, 00:09:04.854 "data_offset": 0, 00:09:04.854 "data_size": 0 00:09:04.854 }, 00:09:04.854 { 00:09:04.854 "name": "BaseBdev3", 00:09:04.854 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:04.854 "is_configured": false, 00:09:04.854 "data_offset": 0, 00:09:04.854 "data_size": 0 00:09:04.854 }, 00:09:04.854 { 00:09:04.854 "name": "BaseBdev4", 00:09:04.854 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:04.854 "is_configured": false, 00:09:04.854 "data_offset": 0, 00:09:04.854 "data_size": 0 00:09:04.854 } 00:09:04.854 ] 00:09:04.854 }' 00:09:04.854 04:58:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:04.854 04:58:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.423 04:58:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:09:05.423 04:58:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.423 04:58:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.423 [2024-12-14 04:58:16.082402] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:05.423 [2024-12-14 04:58:16.082447] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:09:05.423 04:58:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.423 04:58:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:05.423 04:58:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.423 04:58:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.423 [2024-12-14 04:58:16.094432] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:05.423 [2024-12-14 04:58:16.094470] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:05.423 [2024-12-14 04:58:16.094478] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:05.423 [2024-12-14 04:58:16.094486] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:05.423 [2024-12-14 04:58:16.094492] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:05.423 [2024-12-14 04:58:16.094499] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:05.423 [2024-12-14 04:58:16.094505] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:05.423 [2024-12-14 04:58:16.094512] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:05.423 04:58:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.423 04:58:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:05.423 04:58:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.423 04:58:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.423 [2024-12-14 04:58:16.115026] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:05.423 BaseBdev1 00:09:05.423 04:58:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.423 04:58:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:05.423 04:58:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:05.423 04:58:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:05.423 04:58:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:05.423 04:58:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:05.423 04:58:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:05.423 04:58:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:05.423 04:58:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.423 04:58:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.423 04:58:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.423 04:58:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:05.423 04:58:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.423 04:58:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.423 [ 00:09:05.423 { 00:09:05.423 "name": "BaseBdev1", 00:09:05.423 "aliases": [ 00:09:05.423 "9512b0f3-60c4-42ab-beab-06efadacba10" 00:09:05.423 ], 00:09:05.423 "product_name": "Malloc disk", 00:09:05.423 "block_size": 512, 00:09:05.423 "num_blocks": 65536, 00:09:05.423 "uuid": "9512b0f3-60c4-42ab-beab-06efadacba10", 00:09:05.423 "assigned_rate_limits": { 00:09:05.423 "rw_ios_per_sec": 0, 00:09:05.423 "rw_mbytes_per_sec": 0, 00:09:05.423 "r_mbytes_per_sec": 0, 00:09:05.423 "w_mbytes_per_sec": 0 00:09:05.423 }, 00:09:05.423 "claimed": true, 00:09:05.423 "claim_type": "exclusive_write", 00:09:05.423 "zoned": false, 00:09:05.423 "supported_io_types": { 00:09:05.423 "read": true, 00:09:05.423 "write": true, 00:09:05.423 "unmap": true, 00:09:05.423 "flush": true, 00:09:05.423 "reset": true, 00:09:05.423 "nvme_admin": false, 00:09:05.423 "nvme_io": false, 00:09:05.423 "nvme_io_md": false, 00:09:05.423 "write_zeroes": true, 00:09:05.423 "zcopy": true, 00:09:05.423 "get_zone_info": false, 00:09:05.423 "zone_management": false, 00:09:05.423 "zone_append": false, 00:09:05.423 "compare": false, 00:09:05.423 "compare_and_write": false, 00:09:05.423 "abort": true, 00:09:05.423 "seek_hole": false, 00:09:05.423 "seek_data": false, 00:09:05.423 "copy": true, 00:09:05.423 "nvme_iov_md": false 00:09:05.423 }, 00:09:05.423 "memory_domains": [ 00:09:05.423 { 00:09:05.423 "dma_device_id": "system", 00:09:05.423 "dma_device_type": 1 00:09:05.423 }, 00:09:05.423 { 00:09:05.423 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:05.423 "dma_device_type": 2 00:09:05.423 } 00:09:05.423 ], 00:09:05.423 "driver_specific": {} 00:09:05.423 } 00:09:05.423 ] 00:09:05.423 04:58:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:09:05.423 04:58:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:05.423 04:58:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:05.423 04:58:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:05.423 04:58:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:05.423 04:58:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:05.423 04:58:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:05.423 04:58:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:05.423 04:58:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:05.423 04:58:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:05.423 04:58:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:05.423 04:58:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:05.423 04:58:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:05.423 04:58:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:05.423 04:58:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.423 04:58:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.423 04:58:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.424 04:58:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:05.424 "name": "Existed_Raid", 
00:09:05.424 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:05.424 "strip_size_kb": 64, 00:09:05.424 "state": "configuring", 00:09:05.424 "raid_level": "raid0", 00:09:05.424 "superblock": false, 00:09:05.424 "num_base_bdevs": 4, 00:09:05.424 "num_base_bdevs_discovered": 1, 00:09:05.424 "num_base_bdevs_operational": 4, 00:09:05.424 "base_bdevs_list": [ 00:09:05.424 { 00:09:05.424 "name": "BaseBdev1", 00:09:05.424 "uuid": "9512b0f3-60c4-42ab-beab-06efadacba10", 00:09:05.424 "is_configured": true, 00:09:05.424 "data_offset": 0, 00:09:05.424 "data_size": 65536 00:09:05.424 }, 00:09:05.424 { 00:09:05.424 "name": "BaseBdev2", 00:09:05.424 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:05.424 "is_configured": false, 00:09:05.424 "data_offset": 0, 00:09:05.424 "data_size": 0 00:09:05.424 }, 00:09:05.424 { 00:09:05.424 "name": "BaseBdev3", 00:09:05.424 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:05.424 "is_configured": false, 00:09:05.424 "data_offset": 0, 00:09:05.424 "data_size": 0 00:09:05.424 }, 00:09:05.424 { 00:09:05.424 "name": "BaseBdev4", 00:09:05.424 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:05.424 "is_configured": false, 00:09:05.424 "data_offset": 0, 00:09:05.424 "data_size": 0 00:09:05.424 } 00:09:05.424 ] 00:09:05.424 }' 00:09:05.424 04:58:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:05.424 04:58:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.992 04:58:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:05.992 04:58:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.992 04:58:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.992 [2024-12-14 04:58:16.582258] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:05.992 [2024-12-14 04:58:16.582309] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:09:05.992 04:58:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.992 04:58:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:05.992 04:58:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.992 04:58:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.992 [2024-12-14 04:58:16.590281] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:05.992 [2024-12-14 04:58:16.592062] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:05.992 [2024-12-14 04:58:16.592104] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:05.992 [2024-12-14 04:58:16.592113] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:05.992 [2024-12-14 04:58:16.592121] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:05.992 [2024-12-14 04:58:16.592127] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:05.992 [2024-12-14 04:58:16.592135] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:05.992 04:58:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.992 04:58:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:05.992 04:58:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:05.992 04:58:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 
00:09:05.992 04:58:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:05.992 04:58:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:05.992 04:58:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:05.992 04:58:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:05.992 04:58:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:05.992 04:58:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:05.992 04:58:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:05.992 04:58:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:05.992 04:58:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:05.992 04:58:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:05.992 04:58:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:05.993 04:58:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.993 04:58:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.993 04:58:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.993 04:58:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:05.993 "name": "Existed_Raid", 00:09:05.993 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:05.993 "strip_size_kb": 64, 00:09:05.993 "state": "configuring", 00:09:05.993 "raid_level": "raid0", 00:09:05.993 "superblock": false, 00:09:05.993 "num_base_bdevs": 4, 00:09:05.993 
"num_base_bdevs_discovered": 1, 00:09:05.993 "num_base_bdevs_operational": 4, 00:09:05.993 "base_bdevs_list": [ 00:09:05.993 { 00:09:05.993 "name": "BaseBdev1", 00:09:05.993 "uuid": "9512b0f3-60c4-42ab-beab-06efadacba10", 00:09:05.993 "is_configured": true, 00:09:05.993 "data_offset": 0, 00:09:05.993 "data_size": 65536 00:09:05.993 }, 00:09:05.993 { 00:09:05.993 "name": "BaseBdev2", 00:09:05.993 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:05.993 "is_configured": false, 00:09:05.993 "data_offset": 0, 00:09:05.993 "data_size": 0 00:09:05.993 }, 00:09:05.993 { 00:09:05.993 "name": "BaseBdev3", 00:09:05.993 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:05.993 "is_configured": false, 00:09:05.993 "data_offset": 0, 00:09:05.993 "data_size": 0 00:09:05.993 }, 00:09:05.993 { 00:09:05.993 "name": "BaseBdev4", 00:09:05.993 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:05.993 "is_configured": false, 00:09:05.993 "data_offset": 0, 00:09:05.993 "data_size": 0 00:09:05.993 } 00:09:05.993 ] 00:09:05.993 }' 00:09:05.993 04:58:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:05.993 04:58:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.252 04:58:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:06.252 04:58:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.252 04:58:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.252 [2024-12-14 04:58:17.030793] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:06.252 BaseBdev2 00:09:06.252 04:58:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.252 04:58:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:06.252 04:58:17 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:06.252 04:58:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:06.252 04:58:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:06.252 04:58:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:06.252 04:58:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:06.252 04:58:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:06.252 04:58:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.252 04:58:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.252 04:58:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.252 04:58:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:06.252 04:58:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.252 04:58:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.252 [ 00:09:06.252 { 00:09:06.252 "name": "BaseBdev2", 00:09:06.252 "aliases": [ 00:09:06.252 "b9091bc1-6582-474b-9459-c26bd928af74" 00:09:06.252 ], 00:09:06.252 "product_name": "Malloc disk", 00:09:06.252 "block_size": 512, 00:09:06.252 "num_blocks": 65536, 00:09:06.252 "uuid": "b9091bc1-6582-474b-9459-c26bd928af74", 00:09:06.252 "assigned_rate_limits": { 00:09:06.252 "rw_ios_per_sec": 0, 00:09:06.252 "rw_mbytes_per_sec": 0, 00:09:06.252 "r_mbytes_per_sec": 0, 00:09:06.252 "w_mbytes_per_sec": 0 00:09:06.252 }, 00:09:06.252 "claimed": true, 00:09:06.252 "claim_type": "exclusive_write", 00:09:06.252 "zoned": false, 00:09:06.252 "supported_io_types": { 
00:09:06.252 "read": true, 00:09:06.252 "write": true, 00:09:06.252 "unmap": true, 00:09:06.252 "flush": true, 00:09:06.252 "reset": true, 00:09:06.252 "nvme_admin": false, 00:09:06.252 "nvme_io": false, 00:09:06.252 "nvme_io_md": false, 00:09:06.252 "write_zeroes": true, 00:09:06.252 "zcopy": true, 00:09:06.252 "get_zone_info": false, 00:09:06.252 "zone_management": false, 00:09:06.252 "zone_append": false, 00:09:06.253 "compare": false, 00:09:06.253 "compare_and_write": false, 00:09:06.253 "abort": true, 00:09:06.253 "seek_hole": false, 00:09:06.253 "seek_data": false, 00:09:06.253 "copy": true, 00:09:06.253 "nvme_iov_md": false 00:09:06.253 }, 00:09:06.253 "memory_domains": [ 00:09:06.253 { 00:09:06.253 "dma_device_id": "system", 00:09:06.253 "dma_device_type": 1 00:09:06.253 }, 00:09:06.253 { 00:09:06.253 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:06.253 "dma_device_type": 2 00:09:06.253 } 00:09:06.253 ], 00:09:06.253 "driver_specific": {} 00:09:06.253 } 00:09:06.253 ] 00:09:06.253 04:58:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.253 04:58:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:06.253 04:58:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:06.253 04:58:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:06.253 04:58:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:06.253 04:58:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:06.253 04:58:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:06.253 04:58:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:06.253 04:58:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:09:06.253 04:58:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:06.253 04:58:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:06.253 04:58:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:06.253 04:58:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:06.253 04:58:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:06.253 04:58:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:06.253 04:58:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:06.253 04:58:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.253 04:58:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.253 04:58:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.253 04:58:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:06.253 "name": "Existed_Raid", 00:09:06.253 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:06.253 "strip_size_kb": 64, 00:09:06.253 "state": "configuring", 00:09:06.253 "raid_level": "raid0", 00:09:06.253 "superblock": false, 00:09:06.253 "num_base_bdevs": 4, 00:09:06.253 "num_base_bdevs_discovered": 2, 00:09:06.253 "num_base_bdevs_operational": 4, 00:09:06.253 "base_bdevs_list": [ 00:09:06.253 { 00:09:06.253 "name": "BaseBdev1", 00:09:06.253 "uuid": "9512b0f3-60c4-42ab-beab-06efadacba10", 00:09:06.253 "is_configured": true, 00:09:06.253 "data_offset": 0, 00:09:06.253 "data_size": 65536 00:09:06.253 }, 00:09:06.253 { 00:09:06.253 "name": "BaseBdev2", 00:09:06.253 "uuid": "b9091bc1-6582-474b-9459-c26bd928af74", 00:09:06.253 
"is_configured": true, 00:09:06.253 "data_offset": 0, 00:09:06.253 "data_size": 65536 00:09:06.253 }, 00:09:06.253 { 00:09:06.253 "name": "BaseBdev3", 00:09:06.253 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:06.253 "is_configured": false, 00:09:06.253 "data_offset": 0, 00:09:06.253 "data_size": 0 00:09:06.253 }, 00:09:06.253 { 00:09:06.253 "name": "BaseBdev4", 00:09:06.253 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:06.253 "is_configured": false, 00:09:06.253 "data_offset": 0, 00:09:06.253 "data_size": 0 00:09:06.253 } 00:09:06.253 ] 00:09:06.253 }' 00:09:06.253 04:58:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:06.253 04:58:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.821 04:58:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:06.821 04:58:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.821 04:58:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.821 [2024-12-14 04:58:17.536844] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:06.821 BaseBdev3 00:09:06.821 04:58:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.821 04:58:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:06.821 04:58:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:09:06.821 04:58:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:06.821 04:58:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:06.821 04:58:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:06.821 04:58:17 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:06.821 04:58:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:06.821 04:58:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.821 04:58:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.821 04:58:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.821 04:58:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:06.821 04:58:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.821 04:58:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.821 [ 00:09:06.821 { 00:09:06.821 "name": "BaseBdev3", 00:09:06.821 "aliases": [ 00:09:06.821 "77a66e7b-0aa4-4cac-a671-915ed92c6cc3" 00:09:06.821 ], 00:09:06.821 "product_name": "Malloc disk", 00:09:06.821 "block_size": 512, 00:09:06.821 "num_blocks": 65536, 00:09:06.821 "uuid": "77a66e7b-0aa4-4cac-a671-915ed92c6cc3", 00:09:06.821 "assigned_rate_limits": { 00:09:06.821 "rw_ios_per_sec": 0, 00:09:06.821 "rw_mbytes_per_sec": 0, 00:09:06.821 "r_mbytes_per_sec": 0, 00:09:06.821 "w_mbytes_per_sec": 0 00:09:06.821 }, 00:09:06.821 "claimed": true, 00:09:06.821 "claim_type": "exclusive_write", 00:09:06.821 "zoned": false, 00:09:06.821 "supported_io_types": { 00:09:06.821 "read": true, 00:09:06.821 "write": true, 00:09:06.821 "unmap": true, 00:09:06.821 "flush": true, 00:09:06.821 "reset": true, 00:09:06.821 "nvme_admin": false, 00:09:06.821 "nvme_io": false, 00:09:06.821 "nvme_io_md": false, 00:09:06.821 "write_zeroes": true, 00:09:06.821 "zcopy": true, 00:09:06.821 "get_zone_info": false, 00:09:06.821 "zone_management": false, 00:09:06.821 "zone_append": false, 00:09:06.821 "compare": false, 00:09:06.821 "compare_and_write": false, 
00:09:06.821 "abort": true, 00:09:06.821 "seek_hole": false, 00:09:06.821 "seek_data": false, 00:09:06.821 "copy": true, 00:09:06.821 "nvme_iov_md": false 00:09:06.821 }, 00:09:06.821 "memory_domains": [ 00:09:06.821 { 00:09:06.821 "dma_device_id": "system", 00:09:06.821 "dma_device_type": 1 00:09:06.821 }, 00:09:06.821 { 00:09:06.821 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:06.821 "dma_device_type": 2 00:09:06.821 } 00:09:06.821 ], 00:09:06.821 "driver_specific": {} 00:09:06.821 } 00:09:06.821 ] 00:09:06.821 04:58:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.821 04:58:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:06.821 04:58:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:06.821 04:58:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:06.821 04:58:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:06.821 04:58:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:06.821 04:58:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:06.821 04:58:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:06.822 04:58:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:06.822 04:58:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:06.822 04:58:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:06.822 04:58:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:06.822 04:58:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:09:06.822 04:58:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:06.822 04:58:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:06.822 04:58:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:06.822 04:58:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.822 04:58:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.822 04:58:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.822 04:58:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:06.822 "name": "Existed_Raid", 00:09:06.822 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:06.822 "strip_size_kb": 64, 00:09:06.822 "state": "configuring", 00:09:06.822 "raid_level": "raid0", 00:09:06.822 "superblock": false, 00:09:06.822 "num_base_bdevs": 4, 00:09:06.822 "num_base_bdevs_discovered": 3, 00:09:06.822 "num_base_bdevs_operational": 4, 00:09:06.822 "base_bdevs_list": [ 00:09:06.822 { 00:09:06.822 "name": "BaseBdev1", 00:09:06.822 "uuid": "9512b0f3-60c4-42ab-beab-06efadacba10", 00:09:06.822 "is_configured": true, 00:09:06.822 "data_offset": 0, 00:09:06.822 "data_size": 65536 00:09:06.822 }, 00:09:06.822 { 00:09:06.822 "name": "BaseBdev2", 00:09:06.822 "uuid": "b9091bc1-6582-474b-9459-c26bd928af74", 00:09:06.822 "is_configured": true, 00:09:06.822 "data_offset": 0, 00:09:06.822 "data_size": 65536 00:09:06.822 }, 00:09:06.822 { 00:09:06.822 "name": "BaseBdev3", 00:09:06.822 "uuid": "77a66e7b-0aa4-4cac-a671-915ed92c6cc3", 00:09:06.822 "is_configured": true, 00:09:06.822 "data_offset": 0, 00:09:06.822 "data_size": 65536 00:09:06.822 }, 00:09:06.822 { 00:09:06.822 "name": "BaseBdev4", 00:09:06.822 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:06.822 "is_configured": false, 
00:09:06.822 "data_offset": 0, 00:09:06.822 "data_size": 0 00:09:06.822 } 00:09:06.822 ] 00:09:06.822 }' 00:09:06.822 04:58:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:06.822 04:58:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.389 04:58:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:09:07.389 04:58:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.389 04:58:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.389 [2024-12-14 04:58:18.014951] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:09:07.389 [2024-12-14 04:58:18.014999] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:09:07.389 [2024-12-14 04:58:18.015015] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:09:07.389 [2024-12-14 04:58:18.015327] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:07.389 [2024-12-14 04:58:18.015502] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:09:07.389 [2024-12-14 04:58:18.015530] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:09:07.389 [2024-12-14 04:58:18.015734] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:07.389 BaseBdev4 00:09:07.389 04:58:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.389 04:58:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:09:07.389 04:58:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:09:07.389 04:58:18 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:07.389 04:58:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:07.390 04:58:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:07.390 04:58:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:07.390 04:58:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:07.390 04:58:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.390 04:58:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.390 04:58:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.390 04:58:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:09:07.390 04:58:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.390 04:58:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.390 [ 00:09:07.390 { 00:09:07.390 "name": "BaseBdev4", 00:09:07.390 "aliases": [ 00:09:07.390 "58f2498a-799b-41ec-904e-04458938c02b" 00:09:07.390 ], 00:09:07.390 "product_name": "Malloc disk", 00:09:07.390 "block_size": 512, 00:09:07.390 "num_blocks": 65536, 00:09:07.390 "uuid": "58f2498a-799b-41ec-904e-04458938c02b", 00:09:07.390 "assigned_rate_limits": { 00:09:07.390 "rw_ios_per_sec": 0, 00:09:07.390 "rw_mbytes_per_sec": 0, 00:09:07.390 "r_mbytes_per_sec": 0, 00:09:07.390 "w_mbytes_per_sec": 0 00:09:07.390 }, 00:09:07.390 "claimed": true, 00:09:07.390 "claim_type": "exclusive_write", 00:09:07.390 "zoned": false, 00:09:07.390 "supported_io_types": { 00:09:07.390 "read": true, 00:09:07.390 "write": true, 00:09:07.390 "unmap": true, 00:09:07.390 "flush": true, 00:09:07.390 "reset": true, 00:09:07.390 
"nvme_admin": false, 00:09:07.390 "nvme_io": false, 00:09:07.390 "nvme_io_md": false, 00:09:07.390 "write_zeroes": true, 00:09:07.390 "zcopy": true, 00:09:07.390 "get_zone_info": false, 00:09:07.390 "zone_management": false, 00:09:07.390 "zone_append": false, 00:09:07.390 "compare": false, 00:09:07.390 "compare_and_write": false, 00:09:07.390 "abort": true, 00:09:07.390 "seek_hole": false, 00:09:07.390 "seek_data": false, 00:09:07.390 "copy": true, 00:09:07.390 "nvme_iov_md": false 00:09:07.390 }, 00:09:07.390 "memory_domains": [ 00:09:07.390 { 00:09:07.390 "dma_device_id": "system", 00:09:07.390 "dma_device_type": 1 00:09:07.390 }, 00:09:07.390 { 00:09:07.390 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:07.390 "dma_device_type": 2 00:09:07.390 } 00:09:07.390 ], 00:09:07.390 "driver_specific": {} 00:09:07.390 } 00:09:07.390 ] 00:09:07.390 04:58:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.390 04:58:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:07.390 04:58:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:07.390 04:58:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:07.390 04:58:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:09:07.390 04:58:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:07.390 04:58:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:07.390 04:58:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:07.390 04:58:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:07.390 04:58:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:07.390 04:58:18 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:07.390 04:58:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:07.390 04:58:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:07.390 04:58:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:07.390 04:58:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:07.390 04:58:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.390 04:58:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:07.390 04:58:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.390 04:58:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.390 04:58:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:07.390 "name": "Existed_Raid", 00:09:07.390 "uuid": "50841ada-7519-4415-b853-cee1d74dc69d", 00:09:07.390 "strip_size_kb": 64, 00:09:07.390 "state": "online", 00:09:07.390 "raid_level": "raid0", 00:09:07.390 "superblock": false, 00:09:07.390 "num_base_bdevs": 4, 00:09:07.390 "num_base_bdevs_discovered": 4, 00:09:07.390 "num_base_bdevs_operational": 4, 00:09:07.390 "base_bdevs_list": [ 00:09:07.390 { 00:09:07.390 "name": "BaseBdev1", 00:09:07.390 "uuid": "9512b0f3-60c4-42ab-beab-06efadacba10", 00:09:07.390 "is_configured": true, 00:09:07.390 "data_offset": 0, 00:09:07.390 "data_size": 65536 00:09:07.390 }, 00:09:07.390 { 00:09:07.390 "name": "BaseBdev2", 00:09:07.390 "uuid": "b9091bc1-6582-474b-9459-c26bd928af74", 00:09:07.390 "is_configured": true, 00:09:07.390 "data_offset": 0, 00:09:07.390 "data_size": 65536 00:09:07.390 }, 00:09:07.390 { 00:09:07.390 "name": "BaseBdev3", 00:09:07.390 "uuid": 
"77a66e7b-0aa4-4cac-a671-915ed92c6cc3", 00:09:07.390 "is_configured": true, 00:09:07.390 "data_offset": 0, 00:09:07.390 "data_size": 65536 00:09:07.390 }, 00:09:07.390 { 00:09:07.390 "name": "BaseBdev4", 00:09:07.390 "uuid": "58f2498a-799b-41ec-904e-04458938c02b", 00:09:07.390 "is_configured": true, 00:09:07.390 "data_offset": 0, 00:09:07.390 "data_size": 65536 00:09:07.390 } 00:09:07.390 ] 00:09:07.390 }' 00:09:07.390 04:58:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:07.390 04:58:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.650 04:58:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:07.650 04:58:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:07.650 04:58:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:07.650 04:58:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:07.650 04:58:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:07.650 04:58:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:07.650 04:58:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:07.650 04:58:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:07.650 04:58:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.650 04:58:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.650 [2024-12-14 04:58:18.450570] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:07.650 04:58:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.650 04:58:18 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:07.650 "name": "Existed_Raid", 00:09:07.650 "aliases": [ 00:09:07.650 "50841ada-7519-4415-b853-cee1d74dc69d" 00:09:07.650 ], 00:09:07.650 "product_name": "Raid Volume", 00:09:07.650 "block_size": 512, 00:09:07.650 "num_blocks": 262144, 00:09:07.650 "uuid": "50841ada-7519-4415-b853-cee1d74dc69d", 00:09:07.650 "assigned_rate_limits": { 00:09:07.650 "rw_ios_per_sec": 0, 00:09:07.650 "rw_mbytes_per_sec": 0, 00:09:07.650 "r_mbytes_per_sec": 0, 00:09:07.650 "w_mbytes_per_sec": 0 00:09:07.650 }, 00:09:07.650 "claimed": false, 00:09:07.650 "zoned": false, 00:09:07.650 "supported_io_types": { 00:09:07.650 "read": true, 00:09:07.650 "write": true, 00:09:07.650 "unmap": true, 00:09:07.650 "flush": true, 00:09:07.650 "reset": true, 00:09:07.650 "nvme_admin": false, 00:09:07.650 "nvme_io": false, 00:09:07.650 "nvme_io_md": false, 00:09:07.650 "write_zeroes": true, 00:09:07.650 "zcopy": false, 00:09:07.650 "get_zone_info": false, 00:09:07.650 "zone_management": false, 00:09:07.650 "zone_append": false, 00:09:07.650 "compare": false, 00:09:07.650 "compare_and_write": false, 00:09:07.650 "abort": false, 00:09:07.650 "seek_hole": false, 00:09:07.650 "seek_data": false, 00:09:07.650 "copy": false, 00:09:07.650 "nvme_iov_md": false 00:09:07.650 }, 00:09:07.650 "memory_domains": [ 00:09:07.650 { 00:09:07.650 "dma_device_id": "system", 00:09:07.650 "dma_device_type": 1 00:09:07.650 }, 00:09:07.650 { 00:09:07.650 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:07.650 "dma_device_type": 2 00:09:07.650 }, 00:09:07.650 { 00:09:07.650 "dma_device_id": "system", 00:09:07.650 "dma_device_type": 1 00:09:07.650 }, 00:09:07.650 { 00:09:07.650 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:07.650 "dma_device_type": 2 00:09:07.650 }, 00:09:07.650 { 00:09:07.650 "dma_device_id": "system", 00:09:07.650 "dma_device_type": 1 00:09:07.650 }, 00:09:07.650 { 00:09:07.650 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:09:07.650 "dma_device_type": 2 00:09:07.650 }, 00:09:07.650 { 00:09:07.650 "dma_device_id": "system", 00:09:07.650 "dma_device_type": 1 00:09:07.650 }, 00:09:07.650 { 00:09:07.650 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:07.650 "dma_device_type": 2 00:09:07.650 } 00:09:07.650 ], 00:09:07.650 "driver_specific": { 00:09:07.650 "raid": { 00:09:07.650 "uuid": "50841ada-7519-4415-b853-cee1d74dc69d", 00:09:07.650 "strip_size_kb": 64, 00:09:07.650 "state": "online", 00:09:07.650 "raid_level": "raid0", 00:09:07.650 "superblock": false, 00:09:07.650 "num_base_bdevs": 4, 00:09:07.650 "num_base_bdevs_discovered": 4, 00:09:07.650 "num_base_bdevs_operational": 4, 00:09:07.650 "base_bdevs_list": [ 00:09:07.650 { 00:09:07.650 "name": "BaseBdev1", 00:09:07.650 "uuid": "9512b0f3-60c4-42ab-beab-06efadacba10", 00:09:07.650 "is_configured": true, 00:09:07.650 "data_offset": 0, 00:09:07.650 "data_size": 65536 00:09:07.650 }, 00:09:07.650 { 00:09:07.650 "name": "BaseBdev2", 00:09:07.650 "uuid": "b9091bc1-6582-474b-9459-c26bd928af74", 00:09:07.650 "is_configured": true, 00:09:07.650 "data_offset": 0, 00:09:07.650 "data_size": 65536 00:09:07.650 }, 00:09:07.650 { 00:09:07.650 "name": "BaseBdev3", 00:09:07.650 "uuid": "77a66e7b-0aa4-4cac-a671-915ed92c6cc3", 00:09:07.650 "is_configured": true, 00:09:07.650 "data_offset": 0, 00:09:07.650 "data_size": 65536 00:09:07.650 }, 00:09:07.650 { 00:09:07.650 "name": "BaseBdev4", 00:09:07.650 "uuid": "58f2498a-799b-41ec-904e-04458938c02b", 00:09:07.650 "is_configured": true, 00:09:07.650 "data_offset": 0, 00:09:07.650 "data_size": 65536 00:09:07.650 } 00:09:07.650 ] 00:09:07.650 } 00:09:07.650 } 00:09:07.650 }' 00:09:07.650 04:58:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:07.650 04:58:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:07.650 BaseBdev2 00:09:07.650 BaseBdev3 
00:09:07.650 BaseBdev4' 00:09:07.650 04:58:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:07.911 04:58:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:07.911 04:58:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:07.911 04:58:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:07.911 04:58:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:07.911 04:58:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.911 04:58:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.911 04:58:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.911 04:58:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:07.911 04:58:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:07.911 04:58:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:07.911 04:58:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:07.911 04:58:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.911 04:58:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.911 04:58:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:07.911 04:58:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.911 04:58:18 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:07.911 04:58:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:07.911 04:58:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:07.911 04:58:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:07.911 04:58:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:07.911 04:58:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.911 04:58:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.911 04:58:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.911 04:58:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:07.911 04:58:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:07.911 04:58:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:07.911 04:58:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:09:07.911 04:58:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:07.911 04:58:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.911 04:58:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.911 04:58:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.911 04:58:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:07.911 04:58:18 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:07.911 04:58:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:07.911 04:58:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.911 04:58:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.911 [2024-12-14 04:58:18.761772] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:07.911 [2024-12-14 04:58:18.761804] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:07.911 [2024-12-14 04:58:18.761856] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:07.911 04:58:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.911 04:58:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:07.911 04:58:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:09:07.911 04:58:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:07.911 04:58:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:07.911 04:58:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:07.911 04:58:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:09:07.911 04:58:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:07.911 04:58:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:07.911 04:58:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:07.911 04:58:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:09:07.911 04:58:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:07.911 04:58:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:07.911 04:58:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:07.911 04:58:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:07.911 04:58:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:07.911 04:58:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:07.911 04:58:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:07.911 04:58:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.911 04:58:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.171 04:58:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.171 04:58:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:08.171 "name": "Existed_Raid", 00:09:08.171 "uuid": "50841ada-7519-4415-b853-cee1d74dc69d", 00:09:08.171 "strip_size_kb": 64, 00:09:08.171 "state": "offline", 00:09:08.171 "raid_level": "raid0", 00:09:08.171 "superblock": false, 00:09:08.171 "num_base_bdevs": 4, 00:09:08.171 "num_base_bdevs_discovered": 3, 00:09:08.171 "num_base_bdevs_operational": 3, 00:09:08.171 "base_bdevs_list": [ 00:09:08.171 { 00:09:08.171 "name": null, 00:09:08.171 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:08.171 "is_configured": false, 00:09:08.171 "data_offset": 0, 00:09:08.171 "data_size": 65536 00:09:08.171 }, 00:09:08.171 { 00:09:08.171 "name": "BaseBdev2", 00:09:08.171 "uuid": "b9091bc1-6582-474b-9459-c26bd928af74", 00:09:08.171 "is_configured": 
true, 00:09:08.171 "data_offset": 0, 00:09:08.171 "data_size": 65536 00:09:08.171 }, 00:09:08.171 { 00:09:08.171 "name": "BaseBdev3", 00:09:08.171 "uuid": "77a66e7b-0aa4-4cac-a671-915ed92c6cc3", 00:09:08.171 "is_configured": true, 00:09:08.171 "data_offset": 0, 00:09:08.171 "data_size": 65536 00:09:08.171 }, 00:09:08.171 { 00:09:08.171 "name": "BaseBdev4", 00:09:08.171 "uuid": "58f2498a-799b-41ec-904e-04458938c02b", 00:09:08.171 "is_configured": true, 00:09:08.171 "data_offset": 0, 00:09:08.171 "data_size": 65536 00:09:08.171 } 00:09:08.171 ] 00:09:08.171 }' 00:09:08.171 04:58:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:08.171 04:58:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.430 04:58:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:08.430 04:58:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:08.430 04:58:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:08.430 04:58:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.430 04:58:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:08.430 04:58:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.430 04:58:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.430 04:58:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:08.430 04:58:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:08.430 04:58:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:08.430 04:58:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:09:08.430 04:58:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.430 [2024-12-14 04:58:19.292215] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:08.430 04:58:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.430 04:58:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:08.430 04:58:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:08.430 04:58:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:08.430 04:58:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:08.430 04:58:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.430 04:58:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.697 04:58:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.697 04:58:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:08.697 04:58:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:08.697 04:58:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:08.697 04:58:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.697 04:58:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.697 [2024-12-14 04:58:19.362964] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:08.697 04:58:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.697 04:58:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:08.697 04:58:19 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:08.697 04:58:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:08.697 04:58:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:08.697 04:58:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.697 04:58:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.697 04:58:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.697 04:58:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:08.697 04:58:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:08.697 04:58:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:09:08.697 04:58:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.697 04:58:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.697 [2024-12-14 04:58:19.430259] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:09:08.697 [2024-12-14 04:58:19.430310] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:09:08.697 04:58:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.697 04:58:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:08.697 04:58:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:08.697 04:58:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:08.697 04:58:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:09:08.697 04:58:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.697 04:58:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:08.697 04:58:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.697 04:58:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:08.697 04:58:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:08.697 04:58:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:09:08.697 04:58:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:08.697 04:58:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:08.697 04:58:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:08.697 04:58:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.697 04:58:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.697 BaseBdev2 00:09:08.697 04:58:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.697 04:58:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:08.697 04:58:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:08.697 04:58:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:08.697 04:58:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:08.697 04:58:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:08.697 04:58:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # 
bdev_timeout=2000 00:09:08.697 04:58:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:08.697 04:58:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.697 04:58:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.697 04:58:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.698 04:58:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:08.698 04:58:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.698 04:58:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.698 [ 00:09:08.698 { 00:09:08.698 "name": "BaseBdev2", 00:09:08.698 "aliases": [ 00:09:08.698 "2b058751-fa9e-481a-97cd-a853556f8f91" 00:09:08.698 ], 00:09:08.698 "product_name": "Malloc disk", 00:09:08.698 "block_size": 512, 00:09:08.698 "num_blocks": 65536, 00:09:08.698 "uuid": "2b058751-fa9e-481a-97cd-a853556f8f91", 00:09:08.698 "assigned_rate_limits": { 00:09:08.698 "rw_ios_per_sec": 0, 00:09:08.698 "rw_mbytes_per_sec": 0, 00:09:08.698 "r_mbytes_per_sec": 0, 00:09:08.698 "w_mbytes_per_sec": 0 00:09:08.698 }, 00:09:08.698 "claimed": false, 00:09:08.698 "zoned": false, 00:09:08.698 "supported_io_types": { 00:09:08.698 "read": true, 00:09:08.698 "write": true, 00:09:08.698 "unmap": true, 00:09:08.698 "flush": true, 00:09:08.698 "reset": true, 00:09:08.698 "nvme_admin": false, 00:09:08.698 "nvme_io": false, 00:09:08.698 "nvme_io_md": false, 00:09:08.698 "write_zeroes": true, 00:09:08.698 "zcopy": true, 00:09:08.698 "get_zone_info": false, 00:09:08.698 "zone_management": false, 00:09:08.698 "zone_append": false, 00:09:08.698 "compare": false, 00:09:08.698 "compare_and_write": false, 00:09:08.698 "abort": true, 00:09:08.698 "seek_hole": false, 00:09:08.698 
"seek_data": false, 00:09:08.698 "copy": true, 00:09:08.698 "nvme_iov_md": false 00:09:08.698 }, 00:09:08.698 "memory_domains": [ 00:09:08.698 { 00:09:08.698 "dma_device_id": "system", 00:09:08.698 "dma_device_type": 1 00:09:08.698 }, 00:09:08.698 { 00:09:08.698 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:08.698 "dma_device_type": 2 00:09:08.698 } 00:09:08.698 ], 00:09:08.698 "driver_specific": {} 00:09:08.698 } 00:09:08.698 ] 00:09:08.698 04:58:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.698 04:58:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:08.698 04:58:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:08.698 04:58:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:08.698 04:58:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:08.698 04:58:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.698 04:58:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.698 BaseBdev3 00:09:08.698 04:58:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.698 04:58:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:08.698 04:58:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:09:08.698 04:58:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:08.698 04:58:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:08.698 04:58:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:08.698 04:58:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 
00:09:08.698 04:58:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:08.698 04:58:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.698 04:58:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.698 04:58:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.698 04:58:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:08.698 04:58:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.698 04:58:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.968 [ 00:09:08.968 { 00:09:08.968 "name": "BaseBdev3", 00:09:08.968 "aliases": [ 00:09:08.968 "c382bb62-3558-4ba4-bff5-ed2a97ef3c19" 00:09:08.968 ], 00:09:08.968 "product_name": "Malloc disk", 00:09:08.968 "block_size": 512, 00:09:08.968 "num_blocks": 65536, 00:09:08.968 "uuid": "c382bb62-3558-4ba4-bff5-ed2a97ef3c19", 00:09:08.968 "assigned_rate_limits": { 00:09:08.968 "rw_ios_per_sec": 0, 00:09:08.968 "rw_mbytes_per_sec": 0, 00:09:08.968 "r_mbytes_per_sec": 0, 00:09:08.968 "w_mbytes_per_sec": 0 00:09:08.968 }, 00:09:08.968 "claimed": false, 00:09:08.968 "zoned": false, 00:09:08.968 "supported_io_types": { 00:09:08.968 "read": true, 00:09:08.968 "write": true, 00:09:08.968 "unmap": true, 00:09:08.968 "flush": true, 00:09:08.968 "reset": true, 00:09:08.968 "nvme_admin": false, 00:09:08.968 "nvme_io": false, 00:09:08.968 "nvme_io_md": false, 00:09:08.968 "write_zeroes": true, 00:09:08.968 "zcopy": true, 00:09:08.968 "get_zone_info": false, 00:09:08.968 "zone_management": false, 00:09:08.968 "zone_append": false, 00:09:08.968 "compare": false, 00:09:08.968 "compare_and_write": false, 00:09:08.968 "abort": true, 00:09:08.968 "seek_hole": false, 00:09:08.968 "seek_data": false, 
00:09:08.968 "copy": true, 00:09:08.968 "nvme_iov_md": false 00:09:08.968 }, 00:09:08.968 "memory_domains": [ 00:09:08.968 { 00:09:08.968 "dma_device_id": "system", 00:09:08.968 "dma_device_type": 1 00:09:08.968 }, 00:09:08.968 { 00:09:08.968 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:08.968 "dma_device_type": 2 00:09:08.968 } 00:09:08.968 ], 00:09:08.968 "driver_specific": {} 00:09:08.968 } 00:09:08.968 ] 00:09:08.968 04:58:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.968 04:58:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:08.968 04:58:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:08.968 04:58:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:08.968 04:58:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:09:08.968 04:58:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.968 04:58:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.968 BaseBdev4 00:09:08.968 04:58:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.968 04:58:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:09:08.968 04:58:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:09:08.968 04:58:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:08.968 04:58:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:08.968 04:58:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:08.968 04:58:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:08.968 
04:58:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:08.968 04:58:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.968 04:58:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.968 04:58:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.968 04:58:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:09:08.968 04:58:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.968 04:58:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.968 [ 00:09:08.968 { 00:09:08.968 "name": "BaseBdev4", 00:09:08.968 "aliases": [ 00:09:08.968 "2b3bdabe-940c-4478-b5ba-8e0ea6ffa880" 00:09:08.968 ], 00:09:08.968 "product_name": "Malloc disk", 00:09:08.968 "block_size": 512, 00:09:08.968 "num_blocks": 65536, 00:09:08.968 "uuid": "2b3bdabe-940c-4478-b5ba-8e0ea6ffa880", 00:09:08.968 "assigned_rate_limits": { 00:09:08.968 "rw_ios_per_sec": 0, 00:09:08.968 "rw_mbytes_per_sec": 0, 00:09:08.968 "r_mbytes_per_sec": 0, 00:09:08.968 "w_mbytes_per_sec": 0 00:09:08.968 }, 00:09:08.968 "claimed": false, 00:09:08.968 "zoned": false, 00:09:08.968 "supported_io_types": { 00:09:08.968 "read": true, 00:09:08.968 "write": true, 00:09:08.968 "unmap": true, 00:09:08.968 "flush": true, 00:09:08.968 "reset": true, 00:09:08.968 "nvme_admin": false, 00:09:08.968 "nvme_io": false, 00:09:08.968 "nvme_io_md": false, 00:09:08.968 "write_zeroes": true, 00:09:08.968 "zcopy": true, 00:09:08.968 "get_zone_info": false, 00:09:08.968 "zone_management": false, 00:09:08.968 "zone_append": false, 00:09:08.968 "compare": false, 00:09:08.968 "compare_and_write": false, 00:09:08.968 "abort": true, 00:09:08.968 "seek_hole": false, 00:09:08.968 "seek_data": false, 00:09:08.968 
"copy": true, 00:09:08.968 "nvme_iov_md": false 00:09:08.968 }, 00:09:08.968 "memory_domains": [ 00:09:08.968 { 00:09:08.968 "dma_device_id": "system", 00:09:08.968 "dma_device_type": 1 00:09:08.968 }, 00:09:08.968 { 00:09:08.968 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:08.968 "dma_device_type": 2 00:09:08.968 } 00:09:08.968 ], 00:09:08.968 "driver_specific": {} 00:09:08.968 } 00:09:08.968 ] 00:09:08.968 04:58:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.968 04:58:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:08.968 04:58:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:08.968 04:58:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:08.968 04:58:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:08.968 04:58:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.968 04:58:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.968 [2024-12-14 04:58:19.649109] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:08.968 [2024-12-14 04:58:19.649149] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:08.968 [2024-12-14 04:58:19.649177] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:08.968 [2024-12-14 04:58:19.650879] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:08.968 [2024-12-14 04:58:19.650934] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:09:08.968 04:58:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.968 04:58:19 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:08.968 04:58:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:08.968 04:58:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:08.968 04:58:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:08.968 04:58:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:08.968 04:58:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:08.968 04:58:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:08.968 04:58:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:08.968 04:58:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:08.968 04:58:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:08.968 04:58:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:08.968 04:58:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:08.968 04:58:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.968 04:58:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.968 04:58:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.968 04:58:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:08.968 "name": "Existed_Raid", 00:09:08.968 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:08.968 "strip_size_kb": 64, 00:09:08.968 "state": "configuring", 00:09:08.968 
"raid_level": "raid0", 00:09:08.968 "superblock": false, 00:09:08.968 "num_base_bdevs": 4, 00:09:08.968 "num_base_bdevs_discovered": 3, 00:09:08.968 "num_base_bdevs_operational": 4, 00:09:08.968 "base_bdevs_list": [ 00:09:08.968 { 00:09:08.968 "name": "BaseBdev1", 00:09:08.968 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:08.968 "is_configured": false, 00:09:08.968 "data_offset": 0, 00:09:08.968 "data_size": 0 00:09:08.968 }, 00:09:08.968 { 00:09:08.968 "name": "BaseBdev2", 00:09:08.968 "uuid": "2b058751-fa9e-481a-97cd-a853556f8f91", 00:09:08.968 "is_configured": true, 00:09:08.968 "data_offset": 0, 00:09:08.968 "data_size": 65536 00:09:08.968 }, 00:09:08.968 { 00:09:08.968 "name": "BaseBdev3", 00:09:08.968 "uuid": "c382bb62-3558-4ba4-bff5-ed2a97ef3c19", 00:09:08.968 "is_configured": true, 00:09:08.968 "data_offset": 0, 00:09:08.968 "data_size": 65536 00:09:08.968 }, 00:09:08.968 { 00:09:08.968 "name": "BaseBdev4", 00:09:08.968 "uuid": "2b3bdabe-940c-4478-b5ba-8e0ea6ffa880", 00:09:08.969 "is_configured": true, 00:09:08.969 "data_offset": 0, 00:09:08.969 "data_size": 65536 00:09:08.969 } 00:09:08.969 ] 00:09:08.969 }' 00:09:08.969 04:58:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:08.969 04:58:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.254 04:58:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:09.514 04:58:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.514 04:58:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.514 [2024-12-14 04:58:20.140287] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:09.514 04:58:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.514 04:58:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:09.515 04:58:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:09.515 04:58:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:09.515 04:58:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:09.515 04:58:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:09.515 04:58:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:09.515 04:58:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:09.515 04:58:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:09.515 04:58:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:09.515 04:58:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:09.515 04:58:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:09.515 04:58:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.515 04:58:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.515 04:58:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:09.515 04:58:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.515 04:58:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:09.515 "name": "Existed_Raid", 00:09:09.515 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:09.515 "strip_size_kb": 64, 00:09:09.515 "state": "configuring", 00:09:09.515 "raid_level": "raid0", 00:09:09.515 "superblock": false, 00:09:09.515 
"num_base_bdevs": 4, 00:09:09.515 "num_base_bdevs_discovered": 2, 00:09:09.515 "num_base_bdevs_operational": 4, 00:09:09.515 "base_bdevs_list": [ 00:09:09.515 { 00:09:09.515 "name": "BaseBdev1", 00:09:09.515 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:09.515 "is_configured": false, 00:09:09.515 "data_offset": 0, 00:09:09.515 "data_size": 0 00:09:09.515 }, 00:09:09.515 { 00:09:09.515 "name": null, 00:09:09.515 "uuid": "2b058751-fa9e-481a-97cd-a853556f8f91", 00:09:09.515 "is_configured": false, 00:09:09.515 "data_offset": 0, 00:09:09.515 "data_size": 65536 00:09:09.515 }, 00:09:09.515 { 00:09:09.515 "name": "BaseBdev3", 00:09:09.515 "uuid": "c382bb62-3558-4ba4-bff5-ed2a97ef3c19", 00:09:09.515 "is_configured": true, 00:09:09.515 "data_offset": 0, 00:09:09.515 "data_size": 65536 00:09:09.515 }, 00:09:09.515 { 00:09:09.515 "name": "BaseBdev4", 00:09:09.515 "uuid": "2b3bdabe-940c-4478-b5ba-8e0ea6ffa880", 00:09:09.515 "is_configured": true, 00:09:09.515 "data_offset": 0, 00:09:09.515 "data_size": 65536 00:09:09.515 } 00:09:09.515 ] 00:09:09.515 }' 00:09:09.515 04:58:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:09.515 04:58:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.775 04:58:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:09.775 04:58:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.775 04:58:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.775 04:58:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:09.775 04:58:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.775 04:58:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:09.775 04:58:20 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:09.775 04:58:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.775 04:58:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.775 [2024-12-14 04:58:20.622330] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:09.775 BaseBdev1 00:09:09.775 04:58:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.775 04:58:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:09.775 04:58:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:09.775 04:58:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:09.775 04:58:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:09.775 04:58:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:09.775 04:58:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:09.775 04:58:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:09.775 04:58:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.775 04:58:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.775 04:58:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.775 04:58:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:09.775 04:58:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.775 04:58:20 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:09.775 [ 00:09:09.775 { 00:09:09.775 "name": "BaseBdev1", 00:09:09.775 "aliases": [ 00:09:09.775 "c4be761e-0166-4103-8995-1bfb80769de4" 00:09:09.775 ], 00:09:09.775 "product_name": "Malloc disk", 00:09:09.775 "block_size": 512, 00:09:09.775 "num_blocks": 65536, 00:09:09.775 "uuid": "c4be761e-0166-4103-8995-1bfb80769de4", 00:09:09.775 "assigned_rate_limits": { 00:09:09.775 "rw_ios_per_sec": 0, 00:09:09.775 "rw_mbytes_per_sec": 0, 00:09:09.775 "r_mbytes_per_sec": 0, 00:09:09.775 "w_mbytes_per_sec": 0 00:09:09.775 }, 00:09:09.775 "claimed": true, 00:09:09.775 "claim_type": "exclusive_write", 00:09:09.775 "zoned": false, 00:09:09.775 "supported_io_types": { 00:09:09.775 "read": true, 00:09:09.775 "write": true, 00:09:09.775 "unmap": true, 00:09:09.775 "flush": true, 00:09:09.775 "reset": true, 00:09:09.775 "nvme_admin": false, 00:09:09.775 "nvme_io": false, 00:09:09.775 "nvme_io_md": false, 00:09:09.775 "write_zeroes": true, 00:09:09.775 "zcopy": true, 00:09:09.775 "get_zone_info": false, 00:09:09.775 "zone_management": false, 00:09:09.775 "zone_append": false, 00:09:09.775 "compare": false, 00:09:09.775 "compare_and_write": false, 00:09:09.775 "abort": true, 00:09:09.775 "seek_hole": false, 00:09:09.775 "seek_data": false, 00:09:09.775 "copy": true, 00:09:09.775 "nvme_iov_md": false 00:09:09.775 }, 00:09:09.775 "memory_domains": [ 00:09:09.775 { 00:09:09.775 "dma_device_id": "system", 00:09:10.035 "dma_device_type": 1 00:09:10.035 }, 00:09:10.035 { 00:09:10.035 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:10.035 "dma_device_type": 2 00:09:10.035 } 00:09:10.035 ], 00:09:10.035 "driver_specific": {} 00:09:10.035 } 00:09:10.035 ] 00:09:10.035 04:58:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.035 04:58:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:10.035 04:58:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:10.035 04:58:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:10.035 04:58:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:10.035 04:58:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:10.035 04:58:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:10.035 04:58:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:10.035 04:58:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:10.035 04:58:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:10.035 04:58:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:10.035 04:58:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:10.035 04:58:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:10.036 04:58:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.036 04:58:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.036 04:58:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:10.036 04:58:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.036 04:58:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:10.036 "name": "Existed_Raid", 00:09:10.036 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:10.036 "strip_size_kb": 64, 00:09:10.036 "state": "configuring", 00:09:10.036 "raid_level": "raid0", 00:09:10.036 "superblock": false, 
00:09:10.036 "num_base_bdevs": 4, 00:09:10.036 "num_base_bdevs_discovered": 3, 00:09:10.036 "num_base_bdevs_operational": 4, 00:09:10.036 "base_bdevs_list": [ 00:09:10.036 { 00:09:10.036 "name": "BaseBdev1", 00:09:10.036 "uuid": "c4be761e-0166-4103-8995-1bfb80769de4", 00:09:10.036 "is_configured": true, 00:09:10.036 "data_offset": 0, 00:09:10.036 "data_size": 65536 00:09:10.036 }, 00:09:10.036 { 00:09:10.036 "name": null, 00:09:10.036 "uuid": "2b058751-fa9e-481a-97cd-a853556f8f91", 00:09:10.036 "is_configured": false, 00:09:10.036 "data_offset": 0, 00:09:10.036 "data_size": 65536 00:09:10.036 }, 00:09:10.036 { 00:09:10.036 "name": "BaseBdev3", 00:09:10.036 "uuid": "c382bb62-3558-4ba4-bff5-ed2a97ef3c19", 00:09:10.036 "is_configured": true, 00:09:10.036 "data_offset": 0, 00:09:10.036 "data_size": 65536 00:09:10.036 }, 00:09:10.036 { 00:09:10.036 "name": "BaseBdev4", 00:09:10.036 "uuid": "2b3bdabe-940c-4478-b5ba-8e0ea6ffa880", 00:09:10.036 "is_configured": true, 00:09:10.036 "data_offset": 0, 00:09:10.036 "data_size": 65536 00:09:10.036 } 00:09:10.036 ] 00:09:10.036 }' 00:09:10.036 04:58:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:10.036 04:58:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.296 04:58:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:10.296 04:58:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:10.296 04:58:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.296 04:58:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.296 04:58:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.296 04:58:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:10.296 04:58:21 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:10.296 04:58:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.296 04:58:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.296 [2024-12-14 04:58:21.137478] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:10.296 04:58:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.296 04:58:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:10.296 04:58:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:10.296 04:58:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:10.296 04:58:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:10.296 04:58:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:10.296 04:58:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:10.296 04:58:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:10.296 04:58:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:10.296 04:58:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:10.296 04:58:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:10.296 04:58:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:10.296 04:58:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:10.296 04:58:21 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.296 04:58:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.296 04:58:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.556 04:58:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:10.556 "name": "Existed_Raid", 00:09:10.556 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:10.556 "strip_size_kb": 64, 00:09:10.556 "state": "configuring", 00:09:10.556 "raid_level": "raid0", 00:09:10.556 "superblock": false, 00:09:10.556 "num_base_bdevs": 4, 00:09:10.556 "num_base_bdevs_discovered": 2, 00:09:10.556 "num_base_bdevs_operational": 4, 00:09:10.556 "base_bdevs_list": [ 00:09:10.556 { 00:09:10.556 "name": "BaseBdev1", 00:09:10.556 "uuid": "c4be761e-0166-4103-8995-1bfb80769de4", 00:09:10.556 "is_configured": true, 00:09:10.556 "data_offset": 0, 00:09:10.556 "data_size": 65536 00:09:10.556 }, 00:09:10.556 { 00:09:10.556 "name": null, 00:09:10.556 "uuid": "2b058751-fa9e-481a-97cd-a853556f8f91", 00:09:10.556 "is_configured": false, 00:09:10.556 "data_offset": 0, 00:09:10.556 "data_size": 65536 00:09:10.556 }, 00:09:10.556 { 00:09:10.556 "name": null, 00:09:10.556 "uuid": "c382bb62-3558-4ba4-bff5-ed2a97ef3c19", 00:09:10.556 "is_configured": false, 00:09:10.556 "data_offset": 0, 00:09:10.556 "data_size": 65536 00:09:10.556 }, 00:09:10.556 { 00:09:10.556 "name": "BaseBdev4", 00:09:10.556 "uuid": "2b3bdabe-940c-4478-b5ba-8e0ea6ffa880", 00:09:10.556 "is_configured": true, 00:09:10.556 "data_offset": 0, 00:09:10.556 "data_size": 65536 00:09:10.556 } 00:09:10.556 ] 00:09:10.556 }' 00:09:10.556 04:58:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:10.556 04:58:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.817 04:58:21 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:10.817 04:58:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.817 04:58:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.817 04:58:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:10.817 04:58:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.817 04:58:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:10.817 04:58:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:10.817 04:58:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.817 04:58:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.817 [2024-12-14 04:58:21.644708] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:10.817 04:58:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.817 04:58:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:10.817 04:58:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:10.817 04:58:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:10.817 04:58:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:10.817 04:58:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:10.817 04:58:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:10.817 04:58:21 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:10.817 04:58:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:10.817 04:58:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:10.817 04:58:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:10.817 04:58:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:10.817 04:58:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:10.817 04:58:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.817 04:58:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.817 04:58:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.077 04:58:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:11.077 "name": "Existed_Raid", 00:09:11.077 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:11.077 "strip_size_kb": 64, 00:09:11.077 "state": "configuring", 00:09:11.077 "raid_level": "raid0", 00:09:11.077 "superblock": false, 00:09:11.077 "num_base_bdevs": 4, 00:09:11.077 "num_base_bdevs_discovered": 3, 00:09:11.077 "num_base_bdevs_operational": 4, 00:09:11.077 "base_bdevs_list": [ 00:09:11.077 { 00:09:11.077 "name": "BaseBdev1", 00:09:11.077 "uuid": "c4be761e-0166-4103-8995-1bfb80769de4", 00:09:11.077 "is_configured": true, 00:09:11.077 "data_offset": 0, 00:09:11.077 "data_size": 65536 00:09:11.077 }, 00:09:11.077 { 00:09:11.077 "name": null, 00:09:11.077 "uuid": "2b058751-fa9e-481a-97cd-a853556f8f91", 00:09:11.077 "is_configured": false, 00:09:11.077 "data_offset": 0, 00:09:11.077 "data_size": 65536 00:09:11.077 }, 00:09:11.077 { 00:09:11.077 "name": "BaseBdev3", 00:09:11.077 "uuid": "c382bb62-3558-4ba4-bff5-ed2a97ef3c19", 
00:09:11.077 "is_configured": true, 00:09:11.077 "data_offset": 0, 00:09:11.077 "data_size": 65536 00:09:11.077 }, 00:09:11.077 { 00:09:11.077 "name": "BaseBdev4", 00:09:11.077 "uuid": "2b3bdabe-940c-4478-b5ba-8e0ea6ffa880", 00:09:11.077 "is_configured": true, 00:09:11.077 "data_offset": 0, 00:09:11.077 "data_size": 65536 00:09:11.077 } 00:09:11.077 ] 00:09:11.077 }' 00:09:11.077 04:58:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:11.077 04:58:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.337 04:58:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:11.337 04:58:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:11.337 04:58:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.337 04:58:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.337 04:58:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.337 04:58:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:11.337 04:58:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:11.337 04:58:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.337 04:58:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.337 [2024-12-14 04:58:22.139879] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:11.337 04:58:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.337 04:58:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:11.337 04:58:22 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:11.337 04:58:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:11.337 04:58:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:11.337 04:58:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:11.337 04:58:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:11.337 04:58:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:11.337 04:58:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:11.337 04:58:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:11.337 04:58:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:11.338 04:58:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:11.338 04:58:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.338 04:58:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.338 04:58:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:11.338 04:58:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.338 04:58:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:11.338 "name": "Existed_Raid", 00:09:11.338 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:11.338 "strip_size_kb": 64, 00:09:11.338 "state": "configuring", 00:09:11.338 "raid_level": "raid0", 00:09:11.338 "superblock": false, 00:09:11.338 "num_base_bdevs": 4, 00:09:11.338 "num_base_bdevs_discovered": 2, 00:09:11.338 
"num_base_bdevs_operational": 4, 00:09:11.338 "base_bdevs_list": [ 00:09:11.338 { 00:09:11.338 "name": null, 00:09:11.338 "uuid": "c4be761e-0166-4103-8995-1bfb80769de4", 00:09:11.338 "is_configured": false, 00:09:11.338 "data_offset": 0, 00:09:11.338 "data_size": 65536 00:09:11.338 }, 00:09:11.338 { 00:09:11.338 "name": null, 00:09:11.338 "uuid": "2b058751-fa9e-481a-97cd-a853556f8f91", 00:09:11.338 "is_configured": false, 00:09:11.338 "data_offset": 0, 00:09:11.338 "data_size": 65536 00:09:11.338 }, 00:09:11.338 { 00:09:11.338 "name": "BaseBdev3", 00:09:11.338 "uuid": "c382bb62-3558-4ba4-bff5-ed2a97ef3c19", 00:09:11.338 "is_configured": true, 00:09:11.338 "data_offset": 0, 00:09:11.338 "data_size": 65536 00:09:11.338 }, 00:09:11.338 { 00:09:11.338 "name": "BaseBdev4", 00:09:11.338 "uuid": "2b3bdabe-940c-4478-b5ba-8e0ea6ffa880", 00:09:11.338 "is_configured": true, 00:09:11.338 "data_offset": 0, 00:09:11.338 "data_size": 65536 00:09:11.338 } 00:09:11.338 ] 00:09:11.338 }' 00:09:11.338 04:58:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:11.338 04:58:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.906 04:58:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:11.906 04:58:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:11.906 04:58:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.906 04:58:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.906 04:58:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.906 04:58:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:11.906 04:58:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev 
Existed_Raid BaseBdev2 00:09:11.906 04:58:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.906 04:58:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.906 [2024-12-14 04:58:22.645537] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:11.906 04:58:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.906 04:58:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:11.907 04:58:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:11.907 04:58:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:11.907 04:58:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:11.907 04:58:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:11.907 04:58:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:11.907 04:58:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:11.907 04:58:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:11.907 04:58:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:11.907 04:58:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:11.907 04:58:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:11.907 04:58:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:11.907 04:58:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.907 
04:58:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.907 04:58:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.907 04:58:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:11.907 "name": "Existed_Raid", 00:09:11.907 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:11.907 "strip_size_kb": 64, 00:09:11.907 "state": "configuring", 00:09:11.907 "raid_level": "raid0", 00:09:11.907 "superblock": false, 00:09:11.907 "num_base_bdevs": 4, 00:09:11.907 "num_base_bdevs_discovered": 3, 00:09:11.907 "num_base_bdevs_operational": 4, 00:09:11.907 "base_bdevs_list": [ 00:09:11.907 { 00:09:11.907 "name": null, 00:09:11.907 "uuid": "c4be761e-0166-4103-8995-1bfb80769de4", 00:09:11.907 "is_configured": false, 00:09:11.907 "data_offset": 0, 00:09:11.907 "data_size": 65536 00:09:11.907 }, 00:09:11.907 { 00:09:11.907 "name": "BaseBdev2", 00:09:11.907 "uuid": "2b058751-fa9e-481a-97cd-a853556f8f91", 00:09:11.907 "is_configured": true, 00:09:11.907 "data_offset": 0, 00:09:11.907 "data_size": 65536 00:09:11.907 }, 00:09:11.907 { 00:09:11.907 "name": "BaseBdev3", 00:09:11.907 "uuid": "c382bb62-3558-4ba4-bff5-ed2a97ef3c19", 00:09:11.907 "is_configured": true, 00:09:11.907 "data_offset": 0, 00:09:11.907 "data_size": 65536 00:09:11.907 }, 00:09:11.907 { 00:09:11.907 "name": "BaseBdev4", 00:09:11.907 "uuid": "2b3bdabe-940c-4478-b5ba-8e0ea6ffa880", 00:09:11.907 "is_configured": true, 00:09:11.907 "data_offset": 0, 00:09:11.907 "data_size": 65536 00:09:11.907 } 00:09:11.907 ] 00:09:11.907 }' 00:09:11.907 04:58:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:11.907 04:58:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.476 04:58:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:12.476 04:58:23 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:12.476 04:58:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.476 04:58:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.476 04:58:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.476 04:58:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:12.476 04:58:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:12.476 04:58:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.476 04:58:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.476 04:58:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:12.476 04:58:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.476 04:58:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u c4be761e-0166-4103-8995-1bfb80769de4 00:09:12.476 04:58:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.476 04:58:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.476 [2024-12-14 04:58:23.155626] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:12.476 [2024-12-14 04:58:23.155676] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:09:12.476 [2024-12-14 04:58:23.155685] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:09:12.476 [2024-12-14 04:58:23.155948] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:12.476 
[2024-12-14 04:58:23.156095] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:09:12.477 [2024-12-14 04:58:23.156123] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:09:12.477 [2024-12-14 04:58:23.156320] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:12.477 NewBaseBdev 00:09:12.477 04:58:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.477 04:58:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:12.477 04:58:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:09:12.477 04:58:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:12.477 04:58:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:12.477 04:58:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:12.477 04:58:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:12.477 04:58:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:12.477 04:58:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.477 04:58:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.477 04:58:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.477 04:58:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:12.477 04:58:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.477 04:58:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:09:12.477 [ 00:09:12.477 { 00:09:12.477 "name": "NewBaseBdev", 00:09:12.477 "aliases": [ 00:09:12.477 "c4be761e-0166-4103-8995-1bfb80769de4" 00:09:12.477 ], 00:09:12.477 "product_name": "Malloc disk", 00:09:12.477 "block_size": 512, 00:09:12.477 "num_blocks": 65536, 00:09:12.477 "uuid": "c4be761e-0166-4103-8995-1bfb80769de4", 00:09:12.477 "assigned_rate_limits": { 00:09:12.477 "rw_ios_per_sec": 0, 00:09:12.477 "rw_mbytes_per_sec": 0, 00:09:12.477 "r_mbytes_per_sec": 0, 00:09:12.477 "w_mbytes_per_sec": 0 00:09:12.477 }, 00:09:12.477 "claimed": true, 00:09:12.477 "claim_type": "exclusive_write", 00:09:12.477 "zoned": false, 00:09:12.477 "supported_io_types": { 00:09:12.477 "read": true, 00:09:12.477 "write": true, 00:09:12.477 "unmap": true, 00:09:12.477 "flush": true, 00:09:12.477 "reset": true, 00:09:12.477 "nvme_admin": false, 00:09:12.477 "nvme_io": false, 00:09:12.477 "nvme_io_md": false, 00:09:12.477 "write_zeroes": true, 00:09:12.477 "zcopy": true, 00:09:12.477 "get_zone_info": false, 00:09:12.477 "zone_management": false, 00:09:12.477 "zone_append": false, 00:09:12.477 "compare": false, 00:09:12.477 "compare_and_write": false, 00:09:12.477 "abort": true, 00:09:12.477 "seek_hole": false, 00:09:12.477 "seek_data": false, 00:09:12.477 "copy": true, 00:09:12.477 "nvme_iov_md": false 00:09:12.477 }, 00:09:12.477 "memory_domains": [ 00:09:12.477 { 00:09:12.477 "dma_device_id": "system", 00:09:12.477 "dma_device_type": 1 00:09:12.477 }, 00:09:12.477 { 00:09:12.477 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:12.477 "dma_device_type": 2 00:09:12.477 } 00:09:12.477 ], 00:09:12.477 "driver_specific": {} 00:09:12.477 } 00:09:12.477 ] 00:09:12.477 04:58:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.477 04:58:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:12.477 04:58:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid 
online raid0 64 4 00:09:12.477 04:58:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:12.477 04:58:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:12.477 04:58:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:12.477 04:58:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:12.477 04:58:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:12.477 04:58:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:12.477 04:58:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:12.477 04:58:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:12.477 04:58:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:12.477 04:58:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:12.477 04:58:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:12.477 04:58:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.477 04:58:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.477 04:58:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.477 04:58:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:12.477 "name": "Existed_Raid", 00:09:12.477 "uuid": "41432315-c789-4405-8cf1-85c6aa1bb93c", 00:09:12.477 "strip_size_kb": 64, 00:09:12.477 "state": "online", 00:09:12.477 "raid_level": "raid0", 00:09:12.477 "superblock": false, 00:09:12.477 "num_base_bdevs": 4, 00:09:12.477 
"num_base_bdevs_discovered": 4, 00:09:12.477 "num_base_bdevs_operational": 4, 00:09:12.477 "base_bdevs_list": [ 00:09:12.477 { 00:09:12.477 "name": "NewBaseBdev", 00:09:12.477 "uuid": "c4be761e-0166-4103-8995-1bfb80769de4", 00:09:12.477 "is_configured": true, 00:09:12.477 "data_offset": 0, 00:09:12.477 "data_size": 65536 00:09:12.477 }, 00:09:12.477 { 00:09:12.477 "name": "BaseBdev2", 00:09:12.477 "uuid": "2b058751-fa9e-481a-97cd-a853556f8f91", 00:09:12.477 "is_configured": true, 00:09:12.477 "data_offset": 0, 00:09:12.477 "data_size": 65536 00:09:12.477 }, 00:09:12.477 { 00:09:12.477 "name": "BaseBdev3", 00:09:12.477 "uuid": "c382bb62-3558-4ba4-bff5-ed2a97ef3c19", 00:09:12.477 "is_configured": true, 00:09:12.477 "data_offset": 0, 00:09:12.477 "data_size": 65536 00:09:12.477 }, 00:09:12.477 { 00:09:12.477 "name": "BaseBdev4", 00:09:12.477 "uuid": "2b3bdabe-940c-4478-b5ba-8e0ea6ffa880", 00:09:12.477 "is_configured": true, 00:09:12.477 "data_offset": 0, 00:09:12.477 "data_size": 65536 00:09:12.477 } 00:09:12.477 ] 00:09:12.477 }' 00:09:12.477 04:58:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:12.477 04:58:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.047 04:58:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:13.047 04:58:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:13.047 04:58:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:13.047 04:58:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:13.047 04:58:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:13.047 04:58:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:13.047 04:58:23 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:13.047 04:58:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:13.047 04:58:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.047 04:58:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.047 [2024-12-14 04:58:23.639150] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:13.047 04:58:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.047 04:58:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:13.047 "name": "Existed_Raid", 00:09:13.047 "aliases": [ 00:09:13.047 "41432315-c789-4405-8cf1-85c6aa1bb93c" 00:09:13.047 ], 00:09:13.047 "product_name": "Raid Volume", 00:09:13.047 "block_size": 512, 00:09:13.047 "num_blocks": 262144, 00:09:13.047 "uuid": "41432315-c789-4405-8cf1-85c6aa1bb93c", 00:09:13.047 "assigned_rate_limits": { 00:09:13.047 "rw_ios_per_sec": 0, 00:09:13.047 "rw_mbytes_per_sec": 0, 00:09:13.047 "r_mbytes_per_sec": 0, 00:09:13.047 "w_mbytes_per_sec": 0 00:09:13.047 }, 00:09:13.047 "claimed": false, 00:09:13.047 "zoned": false, 00:09:13.047 "supported_io_types": { 00:09:13.047 "read": true, 00:09:13.047 "write": true, 00:09:13.047 "unmap": true, 00:09:13.047 "flush": true, 00:09:13.047 "reset": true, 00:09:13.047 "nvme_admin": false, 00:09:13.047 "nvme_io": false, 00:09:13.047 "nvme_io_md": false, 00:09:13.047 "write_zeroes": true, 00:09:13.047 "zcopy": false, 00:09:13.047 "get_zone_info": false, 00:09:13.047 "zone_management": false, 00:09:13.047 "zone_append": false, 00:09:13.047 "compare": false, 00:09:13.047 "compare_and_write": false, 00:09:13.047 "abort": false, 00:09:13.047 "seek_hole": false, 00:09:13.047 "seek_data": false, 00:09:13.047 "copy": false, 00:09:13.047 "nvme_iov_md": false 00:09:13.047 }, 00:09:13.047 "memory_domains": [ 
00:09:13.047 { 00:09:13.047 "dma_device_id": "system", 00:09:13.047 "dma_device_type": 1 00:09:13.047 }, 00:09:13.047 { 00:09:13.047 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:13.047 "dma_device_type": 2 00:09:13.047 }, 00:09:13.047 { 00:09:13.047 "dma_device_id": "system", 00:09:13.047 "dma_device_type": 1 00:09:13.047 }, 00:09:13.047 { 00:09:13.047 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:13.047 "dma_device_type": 2 00:09:13.047 }, 00:09:13.047 { 00:09:13.047 "dma_device_id": "system", 00:09:13.047 "dma_device_type": 1 00:09:13.047 }, 00:09:13.047 { 00:09:13.047 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:13.047 "dma_device_type": 2 00:09:13.047 }, 00:09:13.047 { 00:09:13.047 "dma_device_id": "system", 00:09:13.047 "dma_device_type": 1 00:09:13.047 }, 00:09:13.047 { 00:09:13.047 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:13.047 "dma_device_type": 2 00:09:13.047 } 00:09:13.047 ], 00:09:13.047 "driver_specific": { 00:09:13.047 "raid": { 00:09:13.047 "uuid": "41432315-c789-4405-8cf1-85c6aa1bb93c", 00:09:13.047 "strip_size_kb": 64, 00:09:13.047 "state": "online", 00:09:13.047 "raid_level": "raid0", 00:09:13.047 "superblock": false, 00:09:13.047 "num_base_bdevs": 4, 00:09:13.047 "num_base_bdevs_discovered": 4, 00:09:13.047 "num_base_bdevs_operational": 4, 00:09:13.047 "base_bdevs_list": [ 00:09:13.047 { 00:09:13.047 "name": "NewBaseBdev", 00:09:13.047 "uuid": "c4be761e-0166-4103-8995-1bfb80769de4", 00:09:13.047 "is_configured": true, 00:09:13.047 "data_offset": 0, 00:09:13.047 "data_size": 65536 00:09:13.047 }, 00:09:13.047 { 00:09:13.047 "name": "BaseBdev2", 00:09:13.047 "uuid": "2b058751-fa9e-481a-97cd-a853556f8f91", 00:09:13.047 "is_configured": true, 00:09:13.047 "data_offset": 0, 00:09:13.047 "data_size": 65536 00:09:13.047 }, 00:09:13.047 { 00:09:13.047 "name": "BaseBdev3", 00:09:13.047 "uuid": "c382bb62-3558-4ba4-bff5-ed2a97ef3c19", 00:09:13.047 "is_configured": true, 00:09:13.047 "data_offset": 0, 00:09:13.047 "data_size": 65536 
00:09:13.047 }, 00:09:13.047 { 00:09:13.047 "name": "BaseBdev4", 00:09:13.047 "uuid": "2b3bdabe-940c-4478-b5ba-8e0ea6ffa880", 00:09:13.047 "is_configured": true, 00:09:13.047 "data_offset": 0, 00:09:13.047 "data_size": 65536 00:09:13.047 } 00:09:13.047 ] 00:09:13.047 } 00:09:13.047 } 00:09:13.047 }' 00:09:13.047 04:58:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:13.047 04:58:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:13.047 BaseBdev2 00:09:13.047 BaseBdev3 00:09:13.047 BaseBdev4' 00:09:13.047 04:58:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:13.047 04:58:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:13.047 04:58:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:13.047 04:58:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:13.047 04:58:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:13.047 04:58:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.047 04:58:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.047 04:58:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.047 04:58:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:13.047 04:58:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:13.047 04:58:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:13.047 
04:58:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:13.047 04:58:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:13.047 04:58:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.047 04:58:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.047 04:58:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.047 04:58:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:13.047 04:58:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:13.047 04:58:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:13.047 04:58:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:13.047 04:58:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:13.047 04:58:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.047 04:58:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.047 04:58:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.047 04:58:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:13.047 04:58:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:13.047 04:58:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:13.047 04:58:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 
00:09:13.047 04:58:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.047 04:58:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.047 04:58:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:13.047 04:58:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.308 04:58:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:13.308 04:58:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:13.308 04:58:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:13.308 04:58:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.308 04:58:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.308 [2024-12-14 04:58:23.946309] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:13.308 [2024-12-14 04:58:23.946334] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:13.308 [2024-12-14 04:58:23.946403] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:13.308 [2024-12-14 04:58:23.946467] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:13.308 [2024-12-14 04:58:23.946476] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline 00:09:13.308 04:58:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.308 04:58:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 80365 00:09:13.308 04:58:23 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@950 -- # '[' -z 80365 ']' 00:09:13.308 04:58:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 80365 00:09:13.308 04:58:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:09:13.308 04:58:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:13.308 04:58:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80365 00:09:13.308 04:58:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:13.308 04:58:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:13.308 killing process with pid 80365 00:09:13.308 04:58:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 80365' 00:09:13.308 04:58:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 80365 00:09:13.308 [2024-12-14 04:58:23.989610] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:13.308 04:58:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 80365 00:09:13.308 [2024-12-14 04:58:24.029787] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:13.567 04:58:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:09:13.567 00:09:13.567 real 0m9.524s 00:09:13.567 user 0m16.374s 00:09:13.567 sys 0m1.915s 00:09:13.567 04:58:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:13.567 04:58:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.567 ************************************ 00:09:13.567 END TEST raid_state_function_test 00:09:13.567 ************************************ 00:09:13.567 04:58:24 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb 
raid_state_function_test raid0 4 true 00:09:13.567 04:58:24 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:13.567 04:58:24 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:13.567 04:58:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:13.567 ************************************ 00:09:13.567 START TEST raid_state_function_test_sb 00:09:13.567 ************************************ 00:09:13.567 04:58:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 4 true 00:09:13.567 04:58:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:09:13.567 04:58:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:09:13.567 04:58:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:09:13.567 04:58:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:13.567 04:58:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:13.567 04:58:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:13.567 04:58:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:13.567 04:58:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:13.567 04:58:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:13.567 04:58:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:13.567 04:58:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:13.567 04:58:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:13.567 04:58:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:13.567 
04:58:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:13.567 04:58:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:13.567 04:58:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:09:13.567 04:58:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:13.567 04:58:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:13.567 04:58:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:09:13.567 04:58:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:13.567 04:58:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:13.567 04:58:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:13.567 04:58:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:13.567 04:58:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:13.567 04:58:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:09:13.567 04:58:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:13.567 04:58:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:13.567 04:58:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:09:13.567 04:58:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:09:13.567 04:58:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=81014 00:09:13.567 04:58:24 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:13.567 04:58:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 81014' 00:09:13.567 Process raid pid: 81014 00:09:13.567 04:58:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 81014 00:09:13.567 04:58:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 81014 ']' 00:09:13.567 04:58:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:13.567 04:58:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:13.567 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:13.567 04:58:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:13.567 04:58:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:13.567 04:58:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.567 [2024-12-14 04:58:24.441464] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:09:13.567 [2024-12-14 04:58:24.441592] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:13.826 [2024-12-14 04:58:24.603371] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:13.826 [2024-12-14 04:58:24.648752] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:13.826 [2024-12-14 04:58:24.690610] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:13.826 [2024-12-14 04:58:24.690651] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:14.404 04:58:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:14.405 04:58:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:09:14.405 04:58:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:14.405 04:58:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.405 04:58:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.405 [2024-12-14 04:58:25.264276] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:14.405 [2024-12-14 04:58:25.264322] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:14.405 [2024-12-14 04:58:25.264333] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:14.405 [2024-12-14 04:58:25.264343] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:14.405 [2024-12-14 04:58:25.264349] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find 
bdev with name: BaseBdev3 00:09:14.405 [2024-12-14 04:58:25.264360] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:14.405 [2024-12-14 04:58:25.264366] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:14.405 [2024-12-14 04:58:25.264375] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:14.405 04:58:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.405 04:58:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:14.405 04:58:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:14.405 04:58:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:14.405 04:58:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:14.405 04:58:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:14.405 04:58:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:14.405 04:58:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:14.405 04:58:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:14.405 04:58:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:14.405 04:58:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:14.405 04:58:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:14.405 04:58:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:14.405 04:58:25 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.405 04:58:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.671 04:58:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.671 04:58:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:14.671 "name": "Existed_Raid", 00:09:14.671 "uuid": "f633f31f-945e-4dcf-97fe-23cc169a06a8", 00:09:14.671 "strip_size_kb": 64, 00:09:14.671 "state": "configuring", 00:09:14.671 "raid_level": "raid0", 00:09:14.671 "superblock": true, 00:09:14.671 "num_base_bdevs": 4, 00:09:14.671 "num_base_bdevs_discovered": 0, 00:09:14.671 "num_base_bdevs_operational": 4, 00:09:14.671 "base_bdevs_list": [ 00:09:14.671 { 00:09:14.671 "name": "BaseBdev1", 00:09:14.671 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:14.671 "is_configured": false, 00:09:14.671 "data_offset": 0, 00:09:14.671 "data_size": 0 00:09:14.671 }, 00:09:14.671 { 00:09:14.671 "name": "BaseBdev2", 00:09:14.671 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:14.671 "is_configured": false, 00:09:14.671 "data_offset": 0, 00:09:14.671 "data_size": 0 00:09:14.671 }, 00:09:14.671 { 00:09:14.671 "name": "BaseBdev3", 00:09:14.671 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:14.671 "is_configured": false, 00:09:14.671 "data_offset": 0, 00:09:14.671 "data_size": 0 00:09:14.671 }, 00:09:14.671 { 00:09:14.671 "name": "BaseBdev4", 00:09:14.671 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:14.671 "is_configured": false, 00:09:14.671 "data_offset": 0, 00:09:14.671 "data_size": 0 00:09:14.671 } 00:09:14.671 ] 00:09:14.671 }' 00:09:14.671 04:58:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:14.671 04:58:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.929 04:58:25 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:14.929 04:58:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.929 04:58:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.929 [2024-12-14 04:58:25.703396] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:14.929 [2024-12-14 04:58:25.703438] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:09:14.930 04:58:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.930 04:58:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:14.930 04:58:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.930 04:58:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.930 [2024-12-14 04:58:25.715426] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:14.930 [2024-12-14 04:58:25.715462] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:14.930 [2024-12-14 04:58:25.715470] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:14.930 [2024-12-14 04:58:25.715479] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:14.930 [2024-12-14 04:58:25.715485] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:14.930 [2024-12-14 04:58:25.715494] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:14.930 [2024-12-14 04:58:25.715500] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev4 00:09:14.930 [2024-12-14 04:58:25.715508] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:14.930 04:58:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.930 04:58:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:14.930 04:58:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.930 04:58:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.930 [2024-12-14 04:58:25.736210] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:14.930 BaseBdev1 00:09:14.930 04:58:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.930 04:58:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:14.930 04:58:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:14.930 04:58:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:14.930 04:58:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:14.930 04:58:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:14.930 04:58:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:14.930 04:58:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:14.930 04:58:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.930 04:58:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.930 04:58:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:09:14.930 04:58:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:14.930 04:58:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.930 04:58:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.930 [ 00:09:14.930 { 00:09:14.930 "name": "BaseBdev1", 00:09:14.930 "aliases": [ 00:09:14.930 "8f05f7e8-97e6-4446-93a6-8768175046e9" 00:09:14.930 ], 00:09:14.930 "product_name": "Malloc disk", 00:09:14.930 "block_size": 512, 00:09:14.930 "num_blocks": 65536, 00:09:14.930 "uuid": "8f05f7e8-97e6-4446-93a6-8768175046e9", 00:09:14.930 "assigned_rate_limits": { 00:09:14.930 "rw_ios_per_sec": 0, 00:09:14.930 "rw_mbytes_per_sec": 0, 00:09:14.930 "r_mbytes_per_sec": 0, 00:09:14.930 "w_mbytes_per_sec": 0 00:09:14.930 }, 00:09:14.930 "claimed": true, 00:09:14.930 "claim_type": "exclusive_write", 00:09:14.930 "zoned": false, 00:09:14.930 "supported_io_types": { 00:09:14.930 "read": true, 00:09:14.930 "write": true, 00:09:14.930 "unmap": true, 00:09:14.930 "flush": true, 00:09:14.930 "reset": true, 00:09:14.930 "nvme_admin": false, 00:09:14.930 "nvme_io": false, 00:09:14.930 "nvme_io_md": false, 00:09:14.930 "write_zeroes": true, 00:09:14.930 "zcopy": true, 00:09:14.930 "get_zone_info": false, 00:09:14.930 "zone_management": false, 00:09:14.930 "zone_append": false, 00:09:14.930 "compare": false, 00:09:14.930 "compare_and_write": false, 00:09:14.930 "abort": true, 00:09:14.930 "seek_hole": false, 00:09:14.930 "seek_data": false, 00:09:14.930 "copy": true, 00:09:14.930 "nvme_iov_md": false 00:09:14.930 }, 00:09:14.930 "memory_domains": [ 00:09:14.930 { 00:09:14.930 "dma_device_id": "system", 00:09:14.930 "dma_device_type": 1 00:09:14.930 }, 00:09:14.930 { 00:09:14.930 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:14.930 "dma_device_type": 2 00:09:14.930 } 00:09:14.930 ], 00:09:14.930 "driver_specific": {} 
00:09:14.930 } 00:09:14.930 ] 00:09:14.930 04:58:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.930 04:58:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:14.930 04:58:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:14.930 04:58:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:14.930 04:58:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:14.930 04:58:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:14.930 04:58:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:14.930 04:58:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:14.930 04:58:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:14.930 04:58:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:14.930 04:58:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:14.930 04:58:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:14.930 04:58:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:14.930 04:58:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.930 04:58:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.930 04:58:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:14.930 04:58:25 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.190 04:58:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:15.190 "name": "Existed_Raid", 00:09:15.190 "uuid": "85e5d63f-47f1-42a5-bcbc-c5ec66f6f727", 00:09:15.190 "strip_size_kb": 64, 00:09:15.190 "state": "configuring", 00:09:15.190 "raid_level": "raid0", 00:09:15.190 "superblock": true, 00:09:15.190 "num_base_bdevs": 4, 00:09:15.190 "num_base_bdevs_discovered": 1, 00:09:15.190 "num_base_bdevs_operational": 4, 00:09:15.190 "base_bdevs_list": [ 00:09:15.190 { 00:09:15.190 "name": "BaseBdev1", 00:09:15.190 "uuid": "8f05f7e8-97e6-4446-93a6-8768175046e9", 00:09:15.190 "is_configured": true, 00:09:15.190 "data_offset": 2048, 00:09:15.190 "data_size": 63488 00:09:15.190 }, 00:09:15.190 { 00:09:15.190 "name": "BaseBdev2", 00:09:15.190 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:15.190 "is_configured": false, 00:09:15.190 "data_offset": 0, 00:09:15.190 "data_size": 0 00:09:15.190 }, 00:09:15.190 { 00:09:15.190 "name": "BaseBdev3", 00:09:15.190 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:15.190 "is_configured": false, 00:09:15.190 "data_offset": 0, 00:09:15.190 "data_size": 0 00:09:15.190 }, 00:09:15.190 { 00:09:15.190 "name": "BaseBdev4", 00:09:15.190 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:15.190 "is_configured": false, 00:09:15.190 "data_offset": 0, 00:09:15.190 "data_size": 0 00:09:15.190 } 00:09:15.190 ] 00:09:15.190 }' 00:09:15.190 04:58:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:15.190 04:58:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.450 04:58:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:15.450 04:58:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.450 04:58:26 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:09:15.450 [2024-12-14 04:58:26.179461] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:15.450 [2024-12-14 04:58:26.179507] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:09:15.450 04:58:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.450 04:58:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:15.450 04:58:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.450 04:58:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.450 [2024-12-14 04:58:26.191492] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:15.450 [2024-12-14 04:58:26.193348] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:15.450 [2024-12-14 04:58:26.193387] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:15.450 [2024-12-14 04:58:26.193396] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:15.450 [2024-12-14 04:58:26.193404] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:15.450 [2024-12-14 04:58:26.193411] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:15.450 [2024-12-14 04:58:26.193419] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:15.450 04:58:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.450 04:58:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:15.450 04:58:26 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:15.450 04:58:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:15.450 04:58:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:15.450 04:58:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:15.450 04:58:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:15.450 04:58:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:15.450 04:58:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:15.450 04:58:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:15.450 04:58:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:15.450 04:58:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:15.450 04:58:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:15.450 04:58:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:15.450 04:58:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:15.450 04:58:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.450 04:58:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.450 04:58:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.450 04:58:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:15.450 "name": 
"Existed_Raid", 00:09:15.450 "uuid": "20dec730-be50-42ca-b031-f7df211ac41e", 00:09:15.450 "strip_size_kb": 64, 00:09:15.450 "state": "configuring", 00:09:15.450 "raid_level": "raid0", 00:09:15.450 "superblock": true, 00:09:15.450 "num_base_bdevs": 4, 00:09:15.450 "num_base_bdevs_discovered": 1, 00:09:15.450 "num_base_bdevs_operational": 4, 00:09:15.450 "base_bdevs_list": [ 00:09:15.450 { 00:09:15.450 "name": "BaseBdev1", 00:09:15.450 "uuid": "8f05f7e8-97e6-4446-93a6-8768175046e9", 00:09:15.450 "is_configured": true, 00:09:15.450 "data_offset": 2048, 00:09:15.450 "data_size": 63488 00:09:15.450 }, 00:09:15.450 { 00:09:15.450 "name": "BaseBdev2", 00:09:15.450 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:15.450 "is_configured": false, 00:09:15.450 "data_offset": 0, 00:09:15.450 "data_size": 0 00:09:15.450 }, 00:09:15.450 { 00:09:15.450 "name": "BaseBdev3", 00:09:15.450 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:15.450 "is_configured": false, 00:09:15.450 "data_offset": 0, 00:09:15.450 "data_size": 0 00:09:15.450 }, 00:09:15.450 { 00:09:15.450 "name": "BaseBdev4", 00:09:15.450 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:15.450 "is_configured": false, 00:09:15.450 "data_offset": 0, 00:09:15.450 "data_size": 0 00:09:15.450 } 00:09:15.450 ] 00:09:15.450 }' 00:09:15.450 04:58:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:15.450 04:58:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.020 04:58:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:16.020 04:58:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.020 04:58:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.020 [2024-12-14 04:58:26.680264] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 
00:09:16.020 BaseBdev2 00:09:16.020 04:58:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.020 04:58:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:16.020 04:58:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:16.020 04:58:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:16.020 04:58:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:16.020 04:58:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:16.020 04:58:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:16.020 04:58:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:16.020 04:58:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.020 04:58:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.020 04:58:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.020 04:58:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:16.020 04:58:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.020 04:58:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.020 [ 00:09:16.020 { 00:09:16.020 "name": "BaseBdev2", 00:09:16.020 "aliases": [ 00:09:16.020 "9af3d723-6cda-4eae-b1e7-c9647c794c74" 00:09:16.020 ], 00:09:16.020 "product_name": "Malloc disk", 00:09:16.020 "block_size": 512, 00:09:16.020 "num_blocks": 65536, 00:09:16.020 "uuid": "9af3d723-6cda-4eae-b1e7-c9647c794c74", 00:09:16.020 
"assigned_rate_limits": { 00:09:16.020 "rw_ios_per_sec": 0, 00:09:16.020 "rw_mbytes_per_sec": 0, 00:09:16.020 "r_mbytes_per_sec": 0, 00:09:16.020 "w_mbytes_per_sec": 0 00:09:16.020 }, 00:09:16.020 "claimed": true, 00:09:16.020 "claim_type": "exclusive_write", 00:09:16.020 "zoned": false, 00:09:16.020 "supported_io_types": { 00:09:16.020 "read": true, 00:09:16.020 "write": true, 00:09:16.020 "unmap": true, 00:09:16.020 "flush": true, 00:09:16.020 "reset": true, 00:09:16.020 "nvme_admin": false, 00:09:16.020 "nvme_io": false, 00:09:16.020 "nvme_io_md": false, 00:09:16.020 "write_zeroes": true, 00:09:16.020 "zcopy": true, 00:09:16.020 "get_zone_info": false, 00:09:16.020 "zone_management": false, 00:09:16.020 "zone_append": false, 00:09:16.020 "compare": false, 00:09:16.020 "compare_and_write": false, 00:09:16.020 "abort": true, 00:09:16.020 "seek_hole": false, 00:09:16.020 "seek_data": false, 00:09:16.020 "copy": true, 00:09:16.020 "nvme_iov_md": false 00:09:16.020 }, 00:09:16.020 "memory_domains": [ 00:09:16.020 { 00:09:16.020 "dma_device_id": "system", 00:09:16.020 "dma_device_type": 1 00:09:16.020 }, 00:09:16.020 { 00:09:16.020 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:16.020 "dma_device_type": 2 00:09:16.020 } 00:09:16.020 ], 00:09:16.020 "driver_specific": {} 00:09:16.020 } 00:09:16.020 ] 00:09:16.020 04:58:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.020 04:58:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:16.020 04:58:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:16.020 04:58:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:16.020 04:58:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:16.020 04:58:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:09:16.021 04:58:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:16.021 04:58:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:16.021 04:58:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:16.021 04:58:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:16.021 04:58:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:16.021 04:58:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:16.021 04:58:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:16.021 04:58:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:16.021 04:58:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:16.021 04:58:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:16.021 04:58:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.021 04:58:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.021 04:58:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.021 04:58:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:16.021 "name": "Existed_Raid", 00:09:16.021 "uuid": "20dec730-be50-42ca-b031-f7df211ac41e", 00:09:16.021 "strip_size_kb": 64, 00:09:16.021 "state": "configuring", 00:09:16.021 "raid_level": "raid0", 00:09:16.021 "superblock": true, 00:09:16.021 "num_base_bdevs": 4, 00:09:16.021 "num_base_bdevs_discovered": 2, 00:09:16.021 "num_base_bdevs_operational": 4, 
00:09:16.021 "base_bdevs_list": [ 00:09:16.021 { 00:09:16.021 "name": "BaseBdev1", 00:09:16.021 "uuid": "8f05f7e8-97e6-4446-93a6-8768175046e9", 00:09:16.021 "is_configured": true, 00:09:16.021 "data_offset": 2048, 00:09:16.021 "data_size": 63488 00:09:16.021 }, 00:09:16.021 { 00:09:16.021 "name": "BaseBdev2", 00:09:16.021 "uuid": "9af3d723-6cda-4eae-b1e7-c9647c794c74", 00:09:16.021 "is_configured": true, 00:09:16.021 "data_offset": 2048, 00:09:16.021 "data_size": 63488 00:09:16.021 }, 00:09:16.021 { 00:09:16.021 "name": "BaseBdev3", 00:09:16.021 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:16.021 "is_configured": false, 00:09:16.021 "data_offset": 0, 00:09:16.021 "data_size": 0 00:09:16.021 }, 00:09:16.021 { 00:09:16.021 "name": "BaseBdev4", 00:09:16.021 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:16.021 "is_configured": false, 00:09:16.021 "data_offset": 0, 00:09:16.021 "data_size": 0 00:09:16.021 } 00:09:16.021 ] 00:09:16.021 }' 00:09:16.021 04:58:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:16.021 04:58:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.281 04:58:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:16.281 04:58:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.281 04:58:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.281 [2024-12-14 04:58:27.150349] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:16.281 BaseBdev3 00:09:16.281 04:58:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.281 04:58:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:16.281 04:58:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # 
local bdev_name=BaseBdev3 00:09:16.281 04:58:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:16.281 04:58:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:16.281 04:58:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:16.281 04:58:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:16.281 04:58:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:16.281 04:58:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.281 04:58:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.541 04:58:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.541 04:58:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:16.541 04:58:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.541 04:58:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.541 [ 00:09:16.541 { 00:09:16.541 "name": "BaseBdev3", 00:09:16.541 "aliases": [ 00:09:16.541 "722e1e0d-cdb3-4c8e-bf5b-e8c6da3fcd9d" 00:09:16.541 ], 00:09:16.541 "product_name": "Malloc disk", 00:09:16.541 "block_size": 512, 00:09:16.541 "num_blocks": 65536, 00:09:16.541 "uuid": "722e1e0d-cdb3-4c8e-bf5b-e8c6da3fcd9d", 00:09:16.541 "assigned_rate_limits": { 00:09:16.541 "rw_ios_per_sec": 0, 00:09:16.541 "rw_mbytes_per_sec": 0, 00:09:16.541 "r_mbytes_per_sec": 0, 00:09:16.541 "w_mbytes_per_sec": 0 00:09:16.541 }, 00:09:16.541 "claimed": true, 00:09:16.541 "claim_type": "exclusive_write", 00:09:16.541 "zoned": false, 00:09:16.541 "supported_io_types": { 00:09:16.541 "read": true, 00:09:16.541 
"write": true, 00:09:16.541 "unmap": true, 00:09:16.541 "flush": true, 00:09:16.541 "reset": true, 00:09:16.541 "nvme_admin": false, 00:09:16.541 "nvme_io": false, 00:09:16.541 "nvme_io_md": false, 00:09:16.541 "write_zeroes": true, 00:09:16.541 "zcopy": true, 00:09:16.541 "get_zone_info": false, 00:09:16.541 "zone_management": false, 00:09:16.541 "zone_append": false, 00:09:16.541 "compare": false, 00:09:16.541 "compare_and_write": false, 00:09:16.541 "abort": true, 00:09:16.541 "seek_hole": false, 00:09:16.541 "seek_data": false, 00:09:16.541 "copy": true, 00:09:16.541 "nvme_iov_md": false 00:09:16.541 }, 00:09:16.541 "memory_domains": [ 00:09:16.541 { 00:09:16.541 "dma_device_id": "system", 00:09:16.541 "dma_device_type": 1 00:09:16.541 }, 00:09:16.541 { 00:09:16.541 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:16.541 "dma_device_type": 2 00:09:16.541 } 00:09:16.541 ], 00:09:16.541 "driver_specific": {} 00:09:16.541 } 00:09:16.541 ] 00:09:16.541 04:58:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.541 04:58:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:16.541 04:58:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:16.541 04:58:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:16.541 04:58:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:16.541 04:58:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:16.541 04:58:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:16.541 04:58:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:16.541 04:58:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:09:16.541 04:58:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:16.541 04:58:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:16.541 04:58:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:16.541 04:58:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:16.541 04:58:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:16.541 04:58:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:16.541 04:58:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:16.541 04:58:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.541 04:58:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.541 04:58:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.541 04:58:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:16.541 "name": "Existed_Raid", 00:09:16.541 "uuid": "20dec730-be50-42ca-b031-f7df211ac41e", 00:09:16.541 "strip_size_kb": 64, 00:09:16.541 "state": "configuring", 00:09:16.541 "raid_level": "raid0", 00:09:16.541 "superblock": true, 00:09:16.541 "num_base_bdevs": 4, 00:09:16.541 "num_base_bdevs_discovered": 3, 00:09:16.541 "num_base_bdevs_operational": 4, 00:09:16.541 "base_bdevs_list": [ 00:09:16.541 { 00:09:16.541 "name": "BaseBdev1", 00:09:16.541 "uuid": "8f05f7e8-97e6-4446-93a6-8768175046e9", 00:09:16.541 "is_configured": true, 00:09:16.541 "data_offset": 2048, 00:09:16.541 "data_size": 63488 00:09:16.541 }, 00:09:16.541 { 00:09:16.541 "name": "BaseBdev2", 00:09:16.541 "uuid": 
"9af3d723-6cda-4eae-b1e7-c9647c794c74", 00:09:16.541 "is_configured": true, 00:09:16.541 "data_offset": 2048, 00:09:16.541 "data_size": 63488 00:09:16.541 }, 00:09:16.541 { 00:09:16.541 "name": "BaseBdev3", 00:09:16.541 "uuid": "722e1e0d-cdb3-4c8e-bf5b-e8c6da3fcd9d", 00:09:16.541 "is_configured": true, 00:09:16.541 "data_offset": 2048, 00:09:16.541 "data_size": 63488 00:09:16.541 }, 00:09:16.541 { 00:09:16.541 "name": "BaseBdev4", 00:09:16.541 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:16.541 "is_configured": false, 00:09:16.541 "data_offset": 0, 00:09:16.541 "data_size": 0 00:09:16.541 } 00:09:16.541 ] 00:09:16.541 }' 00:09:16.541 04:58:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:16.541 04:58:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.802 04:58:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:09:16.802 04:58:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.802 04:58:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.802 [2024-12-14 04:58:27.624532] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:09:16.802 [2024-12-14 04:58:27.624729] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:09:16.802 [2024-12-14 04:58:27.624758] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:09:16.802 BaseBdev4 00:09:16.802 [2024-12-14 04:58:27.625024] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:16.802 [2024-12-14 04:58:27.625186] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:09:16.802 [2024-12-14 04:58:27.625208] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000006980 00:09:16.802 [2024-12-14 04:58:27.625325] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:16.802 04:58:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.802 04:58:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:09:16.802 04:58:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:09:16.802 04:58:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:16.802 04:58:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:16.802 04:58:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:16.802 04:58:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:16.802 04:58:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:16.802 04:58:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.802 04:58:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.802 04:58:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.802 04:58:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:09:16.802 04:58:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.802 04:58:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.802 [ 00:09:16.802 { 00:09:16.802 "name": "BaseBdev4", 00:09:16.802 "aliases": [ 00:09:16.802 "5d30fc01-70f2-4269-8cb6-6b1c2bc11cb3" 00:09:16.802 ], 00:09:16.802 "product_name": "Malloc disk", 00:09:16.802 "block_size": 512, 00:09:16.802 
"num_blocks": 65536, 00:09:16.802 "uuid": "5d30fc01-70f2-4269-8cb6-6b1c2bc11cb3", 00:09:16.802 "assigned_rate_limits": { 00:09:16.802 "rw_ios_per_sec": 0, 00:09:16.802 "rw_mbytes_per_sec": 0, 00:09:16.802 "r_mbytes_per_sec": 0, 00:09:16.802 "w_mbytes_per_sec": 0 00:09:16.802 }, 00:09:16.802 "claimed": true, 00:09:16.802 "claim_type": "exclusive_write", 00:09:16.802 "zoned": false, 00:09:16.802 "supported_io_types": { 00:09:16.802 "read": true, 00:09:16.802 "write": true, 00:09:16.802 "unmap": true, 00:09:16.802 "flush": true, 00:09:16.802 "reset": true, 00:09:16.802 "nvme_admin": false, 00:09:16.802 "nvme_io": false, 00:09:16.802 "nvme_io_md": false, 00:09:16.802 "write_zeroes": true, 00:09:16.802 "zcopy": true, 00:09:16.802 "get_zone_info": false, 00:09:16.802 "zone_management": false, 00:09:16.802 "zone_append": false, 00:09:16.802 "compare": false, 00:09:16.802 "compare_and_write": false, 00:09:16.802 "abort": true, 00:09:16.802 "seek_hole": false, 00:09:16.802 "seek_data": false, 00:09:16.802 "copy": true, 00:09:16.802 "nvme_iov_md": false 00:09:16.802 }, 00:09:16.802 "memory_domains": [ 00:09:16.802 { 00:09:16.802 "dma_device_id": "system", 00:09:16.802 "dma_device_type": 1 00:09:16.802 }, 00:09:16.802 { 00:09:16.802 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:16.802 "dma_device_type": 2 00:09:16.802 } 00:09:16.802 ], 00:09:16.802 "driver_specific": {} 00:09:16.802 } 00:09:16.802 ] 00:09:16.802 04:58:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.802 04:58:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:16.802 04:58:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:16.802 04:58:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:16.802 04:58:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 
00:09:16.802 04:58:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:16.802 04:58:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:16.802 04:58:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:16.802 04:58:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:16.802 04:58:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:16.802 04:58:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:16.802 04:58:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:16.802 04:58:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:16.802 04:58:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:16.802 04:58:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:16.802 04:58:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:16.802 04:58:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.802 04:58:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.802 04:58:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.062 04:58:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:17.062 "name": "Existed_Raid", 00:09:17.062 "uuid": "20dec730-be50-42ca-b031-f7df211ac41e", 00:09:17.062 "strip_size_kb": 64, 00:09:17.062 "state": "online", 00:09:17.062 "raid_level": "raid0", 00:09:17.062 "superblock": true, 00:09:17.062 "num_base_bdevs": 4, 
00:09:17.062 "num_base_bdevs_discovered": 4, 00:09:17.062 "num_base_bdevs_operational": 4, 00:09:17.062 "base_bdevs_list": [ 00:09:17.062 { 00:09:17.062 "name": "BaseBdev1", 00:09:17.062 "uuid": "8f05f7e8-97e6-4446-93a6-8768175046e9", 00:09:17.062 "is_configured": true, 00:09:17.062 "data_offset": 2048, 00:09:17.062 "data_size": 63488 00:09:17.062 }, 00:09:17.062 { 00:09:17.062 "name": "BaseBdev2", 00:09:17.062 "uuid": "9af3d723-6cda-4eae-b1e7-c9647c794c74", 00:09:17.062 "is_configured": true, 00:09:17.062 "data_offset": 2048, 00:09:17.062 "data_size": 63488 00:09:17.062 }, 00:09:17.062 { 00:09:17.062 "name": "BaseBdev3", 00:09:17.062 "uuid": "722e1e0d-cdb3-4c8e-bf5b-e8c6da3fcd9d", 00:09:17.062 "is_configured": true, 00:09:17.062 "data_offset": 2048, 00:09:17.062 "data_size": 63488 00:09:17.062 }, 00:09:17.062 { 00:09:17.062 "name": "BaseBdev4", 00:09:17.062 "uuid": "5d30fc01-70f2-4269-8cb6-6b1c2bc11cb3", 00:09:17.062 "is_configured": true, 00:09:17.062 "data_offset": 2048, 00:09:17.062 "data_size": 63488 00:09:17.062 } 00:09:17.062 ] 00:09:17.062 }' 00:09:17.062 04:58:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:17.062 04:58:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:17.322 04:58:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:17.322 04:58:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:17.322 04:58:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:17.322 04:58:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:17.322 04:58:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:17.322 04:58:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:17.322 
04:58:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:17.322 04:58:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:17.322 04:58:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.322 04:58:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:17.322 [2024-12-14 04:58:28.116028] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:17.322 04:58:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.322 04:58:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:17.322 "name": "Existed_Raid", 00:09:17.322 "aliases": [ 00:09:17.322 "20dec730-be50-42ca-b031-f7df211ac41e" 00:09:17.322 ], 00:09:17.322 "product_name": "Raid Volume", 00:09:17.322 "block_size": 512, 00:09:17.322 "num_blocks": 253952, 00:09:17.322 "uuid": "20dec730-be50-42ca-b031-f7df211ac41e", 00:09:17.322 "assigned_rate_limits": { 00:09:17.322 "rw_ios_per_sec": 0, 00:09:17.322 "rw_mbytes_per_sec": 0, 00:09:17.322 "r_mbytes_per_sec": 0, 00:09:17.322 "w_mbytes_per_sec": 0 00:09:17.322 }, 00:09:17.322 "claimed": false, 00:09:17.322 "zoned": false, 00:09:17.322 "supported_io_types": { 00:09:17.322 "read": true, 00:09:17.322 "write": true, 00:09:17.322 "unmap": true, 00:09:17.322 "flush": true, 00:09:17.322 "reset": true, 00:09:17.322 "nvme_admin": false, 00:09:17.322 "nvme_io": false, 00:09:17.322 "nvme_io_md": false, 00:09:17.322 "write_zeroes": true, 00:09:17.322 "zcopy": false, 00:09:17.322 "get_zone_info": false, 00:09:17.322 "zone_management": false, 00:09:17.322 "zone_append": false, 00:09:17.322 "compare": false, 00:09:17.322 "compare_and_write": false, 00:09:17.322 "abort": false, 00:09:17.322 "seek_hole": false, 00:09:17.322 "seek_data": false, 00:09:17.322 "copy": false, 00:09:17.322 
"nvme_iov_md": false 00:09:17.322 }, 00:09:17.322 "memory_domains": [ 00:09:17.322 { 00:09:17.322 "dma_device_id": "system", 00:09:17.322 "dma_device_type": 1 00:09:17.322 }, 00:09:17.322 { 00:09:17.322 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:17.322 "dma_device_type": 2 00:09:17.322 }, 00:09:17.322 { 00:09:17.322 "dma_device_id": "system", 00:09:17.322 "dma_device_type": 1 00:09:17.322 }, 00:09:17.322 { 00:09:17.322 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:17.322 "dma_device_type": 2 00:09:17.322 }, 00:09:17.322 { 00:09:17.322 "dma_device_id": "system", 00:09:17.322 "dma_device_type": 1 00:09:17.322 }, 00:09:17.322 { 00:09:17.322 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:17.322 "dma_device_type": 2 00:09:17.322 }, 00:09:17.322 { 00:09:17.322 "dma_device_id": "system", 00:09:17.322 "dma_device_type": 1 00:09:17.322 }, 00:09:17.322 { 00:09:17.322 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:17.322 "dma_device_type": 2 00:09:17.322 } 00:09:17.322 ], 00:09:17.322 "driver_specific": { 00:09:17.322 "raid": { 00:09:17.322 "uuid": "20dec730-be50-42ca-b031-f7df211ac41e", 00:09:17.322 "strip_size_kb": 64, 00:09:17.322 "state": "online", 00:09:17.322 "raid_level": "raid0", 00:09:17.322 "superblock": true, 00:09:17.322 "num_base_bdevs": 4, 00:09:17.322 "num_base_bdevs_discovered": 4, 00:09:17.322 "num_base_bdevs_operational": 4, 00:09:17.322 "base_bdevs_list": [ 00:09:17.322 { 00:09:17.322 "name": "BaseBdev1", 00:09:17.322 "uuid": "8f05f7e8-97e6-4446-93a6-8768175046e9", 00:09:17.322 "is_configured": true, 00:09:17.322 "data_offset": 2048, 00:09:17.322 "data_size": 63488 00:09:17.322 }, 00:09:17.322 { 00:09:17.322 "name": "BaseBdev2", 00:09:17.322 "uuid": "9af3d723-6cda-4eae-b1e7-c9647c794c74", 00:09:17.322 "is_configured": true, 00:09:17.322 "data_offset": 2048, 00:09:17.322 "data_size": 63488 00:09:17.322 }, 00:09:17.322 { 00:09:17.322 "name": "BaseBdev3", 00:09:17.322 "uuid": "722e1e0d-cdb3-4c8e-bf5b-e8c6da3fcd9d", 00:09:17.322 "is_configured": true, 
00:09:17.322 "data_offset": 2048, 00:09:17.322 "data_size": 63488 00:09:17.322 }, 00:09:17.322 { 00:09:17.322 "name": "BaseBdev4", 00:09:17.322 "uuid": "5d30fc01-70f2-4269-8cb6-6b1c2bc11cb3", 00:09:17.322 "is_configured": true, 00:09:17.322 "data_offset": 2048, 00:09:17.322 "data_size": 63488 00:09:17.322 } 00:09:17.322 ] 00:09:17.322 } 00:09:17.322 } 00:09:17.322 }' 00:09:17.322 04:58:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:17.322 04:58:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:17.322 BaseBdev2 00:09:17.322 BaseBdev3 00:09:17.322 BaseBdev4' 00:09:17.322 04:58:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:17.582 04:58:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:17.582 04:58:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:17.582 04:58:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:17.582 04:58:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:17.582 04:58:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.582 04:58:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:17.582 04:58:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.582 04:58:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:17.582 04:58:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:17.582 04:58:28 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:17.582 04:58:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:17.582 04:58:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.582 04:58:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:17.582 04:58:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:17.582 04:58:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.582 04:58:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:17.582 04:58:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:17.582 04:58:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:17.582 04:58:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:17.582 04:58:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.582 04:58:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:17.582 04:58:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:17.582 04:58:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.582 04:58:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:17.582 04:58:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:17.582 04:58:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:09:17.582 04:58:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:09:17.582 04:58:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:17.582 04:58:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.582 04:58:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:17.582 04:58:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.582 04:58:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:17.582 04:58:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:17.582 04:58:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:17.582 04:58:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.582 04:58:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:17.582 [2024-12-14 04:58:28.423246] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:17.582 [2024-12-14 04:58:28.423275] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:17.582 [2024-12-14 04:58:28.423327] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:17.582 04:58:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.582 04:58:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:17.582 04:58:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:09:17.582 04:58:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:09:17.582 04:58:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:09:17.582 04:58:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:17.582 04:58:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:09:17.582 04:58:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:17.582 04:58:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:17.582 04:58:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:17.582 04:58:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:17.582 04:58:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:17.582 04:58:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:17.582 04:58:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:17.582 04:58:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:17.582 04:58:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:17.582 04:58:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:17.582 04:58:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:17.582 04:58:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.582 04:58:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:17.842 04:58:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:09:17.842 04:58:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:17.842 "name": "Existed_Raid", 00:09:17.842 "uuid": "20dec730-be50-42ca-b031-f7df211ac41e", 00:09:17.842 "strip_size_kb": 64, 00:09:17.842 "state": "offline", 00:09:17.842 "raid_level": "raid0", 00:09:17.842 "superblock": true, 00:09:17.842 "num_base_bdevs": 4, 00:09:17.842 "num_base_bdevs_discovered": 3, 00:09:17.842 "num_base_bdevs_operational": 3, 00:09:17.842 "base_bdevs_list": [ 00:09:17.842 { 00:09:17.842 "name": null, 00:09:17.842 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:17.842 "is_configured": false, 00:09:17.842 "data_offset": 0, 00:09:17.842 "data_size": 63488 00:09:17.842 }, 00:09:17.842 { 00:09:17.842 "name": "BaseBdev2", 00:09:17.842 "uuid": "9af3d723-6cda-4eae-b1e7-c9647c794c74", 00:09:17.842 "is_configured": true, 00:09:17.842 "data_offset": 2048, 00:09:17.842 "data_size": 63488 00:09:17.842 }, 00:09:17.842 { 00:09:17.842 "name": "BaseBdev3", 00:09:17.842 "uuid": "722e1e0d-cdb3-4c8e-bf5b-e8c6da3fcd9d", 00:09:17.842 "is_configured": true, 00:09:17.842 "data_offset": 2048, 00:09:17.842 "data_size": 63488 00:09:17.842 }, 00:09:17.842 { 00:09:17.842 "name": "BaseBdev4", 00:09:17.842 "uuid": "5d30fc01-70f2-4269-8cb6-6b1c2bc11cb3", 00:09:17.842 "is_configured": true, 00:09:17.842 "data_offset": 2048, 00:09:17.842 "data_size": 63488 00:09:17.842 } 00:09:17.842 ] 00:09:17.842 }' 00:09:17.842 04:58:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:17.842 04:58:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.102 04:58:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:18.102 04:58:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:18.102 04:58:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:18.102 
04:58:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.102 04:58:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:18.102 04:58:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.102 04:58:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.102 04:58:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:18.102 04:58:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:18.102 04:58:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:18.102 04:58:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.102 04:58:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.102 [2024-12-14 04:58:28.913574] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:18.102 04:58:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.102 04:58:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:18.102 04:58:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:18.102 04:58:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:18.102 04:58:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:18.102 04:58:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.102 04:58:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.102 04:58:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:09:18.102 04:58:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:18.102 04:58:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:18.102 04:58:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:18.102 04:58:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.102 04:58:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.102 [2024-12-14 04:58:28.980684] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:18.362 04:58:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.362 04:58:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:18.362 04:58:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:18.362 04:58:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:18.362 04:58:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.362 04:58:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.362 04:58:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:18.362 04:58:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.362 04:58:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:18.362 04:58:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:18.362 04:58:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:09:18.362 04:58:29 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.362 04:58:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.362 [2024-12-14 04:58:29.051507] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:09:18.362 [2024-12-14 04:58:29.051575] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:09:18.362 04:58:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.362 04:58:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:18.362 04:58:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:18.362 04:58:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:18.362 04:58:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:18.362 04:58:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.362 04:58:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.362 04:58:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.362 04:58:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:18.362 04:58:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:18.362 04:58:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:09:18.362 04:58:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:18.362 04:58:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:18.362 04:58:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:09:18.362 04:58:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.362 04:58:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.362 BaseBdev2 00:09:18.362 04:58:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.362 04:58:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:18.362 04:58:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:18.362 04:58:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:18.362 04:58:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:18.362 04:58:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:18.363 04:58:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:18.363 04:58:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:18.363 04:58:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.363 04:58:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.363 04:58:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.363 04:58:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:18.363 04:58:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.363 04:58:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.363 [ 00:09:18.363 { 00:09:18.363 "name": "BaseBdev2", 00:09:18.363 "aliases": [ 00:09:18.363 
"35ef303a-c0c9-4066-bf29-e70065efb095" 00:09:18.363 ], 00:09:18.363 "product_name": "Malloc disk", 00:09:18.363 "block_size": 512, 00:09:18.363 "num_blocks": 65536, 00:09:18.363 "uuid": "35ef303a-c0c9-4066-bf29-e70065efb095", 00:09:18.363 "assigned_rate_limits": { 00:09:18.363 "rw_ios_per_sec": 0, 00:09:18.363 "rw_mbytes_per_sec": 0, 00:09:18.363 "r_mbytes_per_sec": 0, 00:09:18.363 "w_mbytes_per_sec": 0 00:09:18.363 }, 00:09:18.363 "claimed": false, 00:09:18.363 "zoned": false, 00:09:18.363 "supported_io_types": { 00:09:18.363 "read": true, 00:09:18.363 "write": true, 00:09:18.363 "unmap": true, 00:09:18.363 "flush": true, 00:09:18.363 "reset": true, 00:09:18.363 "nvme_admin": false, 00:09:18.363 "nvme_io": false, 00:09:18.363 "nvme_io_md": false, 00:09:18.363 "write_zeroes": true, 00:09:18.363 "zcopy": true, 00:09:18.363 "get_zone_info": false, 00:09:18.363 "zone_management": false, 00:09:18.363 "zone_append": false, 00:09:18.363 "compare": false, 00:09:18.363 "compare_and_write": false, 00:09:18.363 "abort": true, 00:09:18.363 "seek_hole": false, 00:09:18.363 "seek_data": false, 00:09:18.363 "copy": true, 00:09:18.363 "nvme_iov_md": false 00:09:18.363 }, 00:09:18.363 "memory_domains": [ 00:09:18.363 { 00:09:18.363 "dma_device_id": "system", 00:09:18.363 "dma_device_type": 1 00:09:18.363 }, 00:09:18.363 { 00:09:18.363 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:18.363 "dma_device_type": 2 00:09:18.363 } 00:09:18.363 ], 00:09:18.363 "driver_specific": {} 00:09:18.363 } 00:09:18.363 ] 00:09:18.363 04:58:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.363 04:58:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:18.363 04:58:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:18.363 04:58:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:18.363 04:58:29 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:18.363 04:58:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.363 04:58:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.363 BaseBdev3 00:09:18.363 04:58:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.363 04:58:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:18.363 04:58:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:09:18.363 04:58:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:18.363 04:58:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:18.363 04:58:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:18.363 04:58:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:18.363 04:58:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:18.363 04:58:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.363 04:58:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.363 04:58:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.363 04:58:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:18.363 04:58:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.363 04:58:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.363 [ 00:09:18.363 { 
00:09:18.363 "name": "BaseBdev3", 00:09:18.363 "aliases": [ 00:09:18.363 "4866d320-cc5e-4a6b-b22e-5751b908b60a" 00:09:18.363 ], 00:09:18.363 "product_name": "Malloc disk", 00:09:18.363 "block_size": 512, 00:09:18.363 "num_blocks": 65536, 00:09:18.363 "uuid": "4866d320-cc5e-4a6b-b22e-5751b908b60a", 00:09:18.363 "assigned_rate_limits": { 00:09:18.363 "rw_ios_per_sec": 0, 00:09:18.363 "rw_mbytes_per_sec": 0, 00:09:18.363 "r_mbytes_per_sec": 0, 00:09:18.363 "w_mbytes_per_sec": 0 00:09:18.363 }, 00:09:18.363 "claimed": false, 00:09:18.363 "zoned": false, 00:09:18.363 "supported_io_types": { 00:09:18.363 "read": true, 00:09:18.363 "write": true, 00:09:18.363 "unmap": true, 00:09:18.363 "flush": true, 00:09:18.363 "reset": true, 00:09:18.363 "nvme_admin": false, 00:09:18.363 "nvme_io": false, 00:09:18.363 "nvme_io_md": false, 00:09:18.363 "write_zeroes": true, 00:09:18.363 "zcopy": true, 00:09:18.363 "get_zone_info": false, 00:09:18.363 "zone_management": false, 00:09:18.363 "zone_append": false, 00:09:18.363 "compare": false, 00:09:18.363 "compare_and_write": false, 00:09:18.363 "abort": true, 00:09:18.363 "seek_hole": false, 00:09:18.363 "seek_data": false, 00:09:18.363 "copy": true, 00:09:18.363 "nvme_iov_md": false 00:09:18.363 }, 00:09:18.363 "memory_domains": [ 00:09:18.363 { 00:09:18.363 "dma_device_id": "system", 00:09:18.363 "dma_device_type": 1 00:09:18.363 }, 00:09:18.363 { 00:09:18.363 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:18.363 "dma_device_type": 2 00:09:18.363 } 00:09:18.363 ], 00:09:18.363 "driver_specific": {} 00:09:18.363 } 00:09:18.363 ] 00:09:18.363 04:58:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.363 04:58:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:18.363 04:58:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:18.363 04:58:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:09:18.363 04:58:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:09:18.363 04:58:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.363 04:58:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.363 BaseBdev4 00:09:18.363 04:58:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.363 04:58:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:09:18.363 04:58:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:09:18.363 04:58:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:18.363 04:58:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:18.363 04:58:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:18.363 04:58:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:18.363 04:58:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:18.363 04:58:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.363 04:58:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.622 04:58:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.622 04:58:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:09:18.622 04:58:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.622 04:58:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:09:18.622 [ 00:09:18.622 { 00:09:18.622 "name": "BaseBdev4", 00:09:18.622 "aliases": [ 00:09:18.622 "8f1915f3-102b-44da-91d9-399bda0cf613" 00:09:18.622 ], 00:09:18.622 "product_name": "Malloc disk", 00:09:18.622 "block_size": 512, 00:09:18.622 "num_blocks": 65536, 00:09:18.622 "uuid": "8f1915f3-102b-44da-91d9-399bda0cf613", 00:09:18.622 "assigned_rate_limits": { 00:09:18.622 "rw_ios_per_sec": 0, 00:09:18.622 "rw_mbytes_per_sec": 0, 00:09:18.622 "r_mbytes_per_sec": 0, 00:09:18.622 "w_mbytes_per_sec": 0 00:09:18.622 }, 00:09:18.622 "claimed": false, 00:09:18.622 "zoned": false, 00:09:18.622 "supported_io_types": { 00:09:18.622 "read": true, 00:09:18.622 "write": true, 00:09:18.623 "unmap": true, 00:09:18.623 "flush": true, 00:09:18.623 "reset": true, 00:09:18.623 "nvme_admin": false, 00:09:18.623 "nvme_io": false, 00:09:18.623 "nvme_io_md": false, 00:09:18.623 "write_zeroes": true, 00:09:18.623 "zcopy": true, 00:09:18.623 "get_zone_info": false, 00:09:18.623 "zone_management": false, 00:09:18.623 "zone_append": false, 00:09:18.623 "compare": false, 00:09:18.623 "compare_and_write": false, 00:09:18.623 "abort": true, 00:09:18.623 "seek_hole": false, 00:09:18.623 "seek_data": false, 00:09:18.623 "copy": true, 00:09:18.623 "nvme_iov_md": false 00:09:18.623 }, 00:09:18.623 "memory_domains": [ 00:09:18.623 { 00:09:18.623 "dma_device_id": "system", 00:09:18.623 "dma_device_type": 1 00:09:18.623 }, 00:09:18.623 { 00:09:18.623 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:18.623 "dma_device_type": 2 00:09:18.623 } 00:09:18.623 ], 00:09:18.623 "driver_specific": {} 00:09:18.623 } 00:09:18.623 ] 00:09:18.623 04:58:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.623 04:58:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:18.623 04:58:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:18.623 04:58:29 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:18.623 04:58:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:18.623 04:58:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.623 04:58:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.623 [2024-12-14 04:58:29.274538] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:18.623 [2024-12-14 04:58:29.274581] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:18.623 [2024-12-14 04:58:29.274600] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:18.623 [2024-12-14 04:58:29.276396] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:18.623 [2024-12-14 04:58:29.276450] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:09:18.623 04:58:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.623 04:58:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:18.623 04:58:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:18.623 04:58:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:18.623 04:58:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:18.623 04:58:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:18.623 04:58:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:09:18.623 04:58:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:18.623 04:58:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:18.623 04:58:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:18.623 04:58:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:18.623 04:58:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:18.623 04:58:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.623 04:58:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:18.623 04:58:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.623 04:58:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.623 04:58:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:18.623 "name": "Existed_Raid", 00:09:18.623 "uuid": "f9414554-f56d-4555-93b0-f33567878072", 00:09:18.623 "strip_size_kb": 64, 00:09:18.623 "state": "configuring", 00:09:18.623 "raid_level": "raid0", 00:09:18.623 "superblock": true, 00:09:18.623 "num_base_bdevs": 4, 00:09:18.623 "num_base_bdevs_discovered": 3, 00:09:18.623 "num_base_bdevs_operational": 4, 00:09:18.623 "base_bdevs_list": [ 00:09:18.623 { 00:09:18.623 "name": "BaseBdev1", 00:09:18.623 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:18.623 "is_configured": false, 00:09:18.623 "data_offset": 0, 00:09:18.623 "data_size": 0 00:09:18.623 }, 00:09:18.623 { 00:09:18.623 "name": "BaseBdev2", 00:09:18.623 "uuid": "35ef303a-c0c9-4066-bf29-e70065efb095", 00:09:18.623 "is_configured": true, 00:09:18.623 "data_offset": 2048, 00:09:18.623 "data_size": 63488 
00:09:18.623 }, 00:09:18.623 { 00:09:18.623 "name": "BaseBdev3", 00:09:18.623 "uuid": "4866d320-cc5e-4a6b-b22e-5751b908b60a", 00:09:18.623 "is_configured": true, 00:09:18.623 "data_offset": 2048, 00:09:18.623 "data_size": 63488 00:09:18.623 }, 00:09:18.623 { 00:09:18.623 "name": "BaseBdev4", 00:09:18.623 "uuid": "8f1915f3-102b-44da-91d9-399bda0cf613", 00:09:18.623 "is_configured": true, 00:09:18.623 "data_offset": 2048, 00:09:18.623 "data_size": 63488 00:09:18.623 } 00:09:18.623 ] 00:09:18.623 }' 00:09:18.623 04:58:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:18.623 04:58:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.883 04:58:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:18.883 04:58:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.883 04:58:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.883 [2024-12-14 04:58:29.709768] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:18.883 04:58:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.883 04:58:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:18.883 04:58:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:18.883 04:58:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:18.883 04:58:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:18.883 04:58:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:18.883 04:58:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:09:18.883 04:58:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:18.883 04:58:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:18.883 04:58:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:18.883 04:58:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:18.883 04:58:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:18.883 04:58:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.883 04:58:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.883 04:58:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:18.883 04:58:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.142 04:58:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:19.142 "name": "Existed_Raid", 00:09:19.142 "uuid": "f9414554-f56d-4555-93b0-f33567878072", 00:09:19.142 "strip_size_kb": 64, 00:09:19.142 "state": "configuring", 00:09:19.142 "raid_level": "raid0", 00:09:19.142 "superblock": true, 00:09:19.142 "num_base_bdevs": 4, 00:09:19.142 "num_base_bdevs_discovered": 2, 00:09:19.142 "num_base_bdevs_operational": 4, 00:09:19.142 "base_bdevs_list": [ 00:09:19.142 { 00:09:19.142 "name": "BaseBdev1", 00:09:19.142 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:19.142 "is_configured": false, 00:09:19.142 "data_offset": 0, 00:09:19.142 "data_size": 0 00:09:19.142 }, 00:09:19.142 { 00:09:19.142 "name": null, 00:09:19.142 "uuid": "35ef303a-c0c9-4066-bf29-e70065efb095", 00:09:19.142 "is_configured": false, 00:09:19.142 "data_offset": 0, 00:09:19.142 "data_size": 63488 
00:09:19.142 }, 00:09:19.142 { 00:09:19.142 "name": "BaseBdev3", 00:09:19.142 "uuid": "4866d320-cc5e-4a6b-b22e-5751b908b60a", 00:09:19.142 "is_configured": true, 00:09:19.142 "data_offset": 2048, 00:09:19.142 "data_size": 63488 00:09:19.142 }, 00:09:19.142 { 00:09:19.142 "name": "BaseBdev4", 00:09:19.142 "uuid": "8f1915f3-102b-44da-91d9-399bda0cf613", 00:09:19.142 "is_configured": true, 00:09:19.142 "data_offset": 2048, 00:09:19.142 "data_size": 63488 00:09:19.142 } 00:09:19.142 ] 00:09:19.142 }' 00:09:19.142 04:58:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:19.142 04:58:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:19.402 04:58:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:19.402 04:58:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:19.402 04:58:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.402 04:58:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:19.402 04:58:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.402 04:58:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:19.402 04:58:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:19.402 04:58:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.402 04:58:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:19.402 [2024-12-14 04:58:30.171842] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:19.402 BaseBdev1 00:09:19.403 04:58:30 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.403 04:58:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:19.403 04:58:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:19.403 04:58:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:19.403 04:58:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:19.403 04:58:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:19.403 04:58:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:19.403 04:58:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:19.403 04:58:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.403 04:58:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:19.403 04:58:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.403 04:58:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:19.403 04:58:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.403 04:58:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:19.403 [ 00:09:19.403 { 00:09:19.403 "name": "BaseBdev1", 00:09:19.403 "aliases": [ 00:09:19.403 "475e151a-3d05-448c-8b2d-025e3e63cc79" 00:09:19.403 ], 00:09:19.403 "product_name": "Malloc disk", 00:09:19.403 "block_size": 512, 00:09:19.403 "num_blocks": 65536, 00:09:19.403 "uuid": "475e151a-3d05-448c-8b2d-025e3e63cc79", 00:09:19.403 "assigned_rate_limits": { 00:09:19.403 "rw_ios_per_sec": 0, 00:09:19.403 "rw_mbytes_per_sec": 0, 
00:09:19.403 "r_mbytes_per_sec": 0, 00:09:19.403 "w_mbytes_per_sec": 0 00:09:19.403 }, 00:09:19.403 "claimed": true, 00:09:19.403 "claim_type": "exclusive_write", 00:09:19.403 "zoned": false, 00:09:19.403 "supported_io_types": { 00:09:19.403 "read": true, 00:09:19.403 "write": true, 00:09:19.403 "unmap": true, 00:09:19.403 "flush": true, 00:09:19.403 "reset": true, 00:09:19.403 "nvme_admin": false, 00:09:19.403 "nvme_io": false, 00:09:19.403 "nvme_io_md": false, 00:09:19.403 "write_zeroes": true, 00:09:19.403 "zcopy": true, 00:09:19.403 "get_zone_info": false, 00:09:19.403 "zone_management": false, 00:09:19.403 "zone_append": false, 00:09:19.403 "compare": false, 00:09:19.403 "compare_and_write": false, 00:09:19.403 "abort": true, 00:09:19.403 "seek_hole": false, 00:09:19.403 "seek_data": false, 00:09:19.403 "copy": true, 00:09:19.403 "nvme_iov_md": false 00:09:19.403 }, 00:09:19.403 "memory_domains": [ 00:09:19.403 { 00:09:19.403 "dma_device_id": "system", 00:09:19.403 "dma_device_type": 1 00:09:19.403 }, 00:09:19.403 { 00:09:19.403 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:19.403 "dma_device_type": 2 00:09:19.403 } 00:09:19.403 ], 00:09:19.403 "driver_specific": {} 00:09:19.403 } 00:09:19.403 ] 00:09:19.403 04:58:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.403 04:58:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:19.403 04:58:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:19.403 04:58:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:19.403 04:58:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:19.403 04:58:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:19.403 04:58:30 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:19.403 04:58:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:19.403 04:58:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:19.403 04:58:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:19.403 04:58:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:19.403 04:58:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:19.403 04:58:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:19.403 04:58:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:19.403 04:58:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.403 04:58:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:19.403 04:58:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.403 04:58:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:19.403 "name": "Existed_Raid", 00:09:19.403 "uuid": "f9414554-f56d-4555-93b0-f33567878072", 00:09:19.403 "strip_size_kb": 64, 00:09:19.403 "state": "configuring", 00:09:19.403 "raid_level": "raid0", 00:09:19.403 "superblock": true, 00:09:19.403 "num_base_bdevs": 4, 00:09:19.403 "num_base_bdevs_discovered": 3, 00:09:19.403 "num_base_bdevs_operational": 4, 00:09:19.403 "base_bdevs_list": [ 00:09:19.403 { 00:09:19.403 "name": "BaseBdev1", 00:09:19.403 "uuid": "475e151a-3d05-448c-8b2d-025e3e63cc79", 00:09:19.403 "is_configured": true, 00:09:19.403 "data_offset": 2048, 00:09:19.403 "data_size": 63488 00:09:19.403 }, 00:09:19.403 { 
00:09:19.403 "name": null, 00:09:19.403 "uuid": "35ef303a-c0c9-4066-bf29-e70065efb095", 00:09:19.403 "is_configured": false, 00:09:19.403 "data_offset": 0, 00:09:19.403 "data_size": 63488 00:09:19.403 }, 00:09:19.403 { 00:09:19.403 "name": "BaseBdev3", 00:09:19.403 "uuid": "4866d320-cc5e-4a6b-b22e-5751b908b60a", 00:09:19.403 "is_configured": true, 00:09:19.403 "data_offset": 2048, 00:09:19.403 "data_size": 63488 00:09:19.403 }, 00:09:19.403 { 00:09:19.403 "name": "BaseBdev4", 00:09:19.403 "uuid": "8f1915f3-102b-44da-91d9-399bda0cf613", 00:09:19.403 "is_configured": true, 00:09:19.403 "data_offset": 2048, 00:09:19.403 "data_size": 63488 00:09:19.403 } 00:09:19.403 ] 00:09:19.403 }' 00:09:19.403 04:58:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:19.403 04:58:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:19.973 04:58:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:19.973 04:58:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.973 04:58:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:19.973 04:58:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:19.973 04:58:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.973 04:58:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:19.973 04:58:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:19.973 04:58:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.973 04:58:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:19.973 [2024-12-14 04:58:30.730921] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:19.973 04:58:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.973 04:58:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:19.973 04:58:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:19.973 04:58:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:19.973 04:58:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:19.973 04:58:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:19.973 04:58:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:19.973 04:58:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:19.973 04:58:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:19.973 04:58:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:19.973 04:58:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:19.973 04:58:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:19.973 04:58:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.973 04:58:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:19.973 04:58:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:19.973 04:58:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.973 04:58:30 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:19.973 "name": "Existed_Raid", 00:09:19.973 "uuid": "f9414554-f56d-4555-93b0-f33567878072", 00:09:19.973 "strip_size_kb": 64, 00:09:19.973 "state": "configuring", 00:09:19.973 "raid_level": "raid0", 00:09:19.973 "superblock": true, 00:09:19.973 "num_base_bdevs": 4, 00:09:19.973 "num_base_bdevs_discovered": 2, 00:09:19.973 "num_base_bdevs_operational": 4, 00:09:19.973 "base_bdevs_list": [ 00:09:19.973 { 00:09:19.973 "name": "BaseBdev1", 00:09:19.973 "uuid": "475e151a-3d05-448c-8b2d-025e3e63cc79", 00:09:19.973 "is_configured": true, 00:09:19.973 "data_offset": 2048, 00:09:19.973 "data_size": 63488 00:09:19.973 }, 00:09:19.973 { 00:09:19.973 "name": null, 00:09:19.973 "uuid": "35ef303a-c0c9-4066-bf29-e70065efb095", 00:09:19.973 "is_configured": false, 00:09:19.973 "data_offset": 0, 00:09:19.973 "data_size": 63488 00:09:19.973 }, 00:09:19.973 { 00:09:19.974 "name": null, 00:09:19.974 "uuid": "4866d320-cc5e-4a6b-b22e-5751b908b60a", 00:09:19.974 "is_configured": false, 00:09:19.974 "data_offset": 0, 00:09:19.974 "data_size": 63488 00:09:19.974 }, 00:09:19.974 { 00:09:19.974 "name": "BaseBdev4", 00:09:19.974 "uuid": "8f1915f3-102b-44da-91d9-399bda0cf613", 00:09:19.974 "is_configured": true, 00:09:19.974 "data_offset": 2048, 00:09:19.974 "data_size": 63488 00:09:19.974 } 00:09:19.974 ] 00:09:19.974 }' 00:09:19.974 04:58:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:19.974 04:58:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.542 04:58:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:20.542 04:58:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:20.542 04:58:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.542 
04:58:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.542 04:58:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.542 04:58:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:20.542 04:58:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:20.542 04:58:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.542 04:58:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.542 [2024-12-14 04:58:31.214178] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:20.542 04:58:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.543 04:58:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:20.543 04:58:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:20.543 04:58:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:20.543 04:58:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:20.543 04:58:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:20.543 04:58:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:20.543 04:58:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:20.543 04:58:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:20.543 04:58:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:09:20.543 04:58:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:20.543 04:58:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:20.543 04:58:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:20.543 04:58:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.543 04:58:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.543 04:58:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.543 04:58:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:20.543 "name": "Existed_Raid", 00:09:20.543 "uuid": "f9414554-f56d-4555-93b0-f33567878072", 00:09:20.543 "strip_size_kb": 64, 00:09:20.543 "state": "configuring", 00:09:20.543 "raid_level": "raid0", 00:09:20.543 "superblock": true, 00:09:20.543 "num_base_bdevs": 4, 00:09:20.543 "num_base_bdevs_discovered": 3, 00:09:20.543 "num_base_bdevs_operational": 4, 00:09:20.543 "base_bdevs_list": [ 00:09:20.543 { 00:09:20.543 "name": "BaseBdev1", 00:09:20.543 "uuid": "475e151a-3d05-448c-8b2d-025e3e63cc79", 00:09:20.543 "is_configured": true, 00:09:20.543 "data_offset": 2048, 00:09:20.543 "data_size": 63488 00:09:20.543 }, 00:09:20.543 { 00:09:20.543 "name": null, 00:09:20.543 "uuid": "35ef303a-c0c9-4066-bf29-e70065efb095", 00:09:20.543 "is_configured": false, 00:09:20.543 "data_offset": 0, 00:09:20.543 "data_size": 63488 00:09:20.543 }, 00:09:20.543 { 00:09:20.543 "name": "BaseBdev3", 00:09:20.543 "uuid": "4866d320-cc5e-4a6b-b22e-5751b908b60a", 00:09:20.543 "is_configured": true, 00:09:20.543 "data_offset": 2048, 00:09:20.543 "data_size": 63488 00:09:20.543 }, 00:09:20.543 { 00:09:20.543 "name": "BaseBdev4", 00:09:20.543 "uuid": 
"8f1915f3-102b-44da-91d9-399bda0cf613", 00:09:20.543 "is_configured": true, 00:09:20.543 "data_offset": 2048, 00:09:20.543 "data_size": 63488 00:09:20.543 } 00:09:20.543 ] 00:09:20.543 }' 00:09:20.543 04:58:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:20.543 04:58:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.802 04:58:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:20.802 04:58:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:20.802 04:58:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.802 04:58:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.802 04:58:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.062 04:58:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:21.062 04:58:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:21.062 04:58:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.062 04:58:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.062 [2024-12-14 04:58:31.697351] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:21.062 04:58:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.062 04:58:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:21.062 04:58:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:21.062 04:58:31 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:21.062 04:58:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:21.062 04:58:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:21.062 04:58:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:21.062 04:58:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:21.062 04:58:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:21.062 04:58:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:21.062 04:58:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:21.062 04:58:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:21.062 04:58:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:21.062 04:58:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.062 04:58:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.062 04:58:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.062 04:58:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:21.062 "name": "Existed_Raid", 00:09:21.062 "uuid": "f9414554-f56d-4555-93b0-f33567878072", 00:09:21.062 "strip_size_kb": 64, 00:09:21.062 "state": "configuring", 00:09:21.062 "raid_level": "raid0", 00:09:21.062 "superblock": true, 00:09:21.062 "num_base_bdevs": 4, 00:09:21.062 "num_base_bdevs_discovered": 2, 00:09:21.062 "num_base_bdevs_operational": 4, 00:09:21.062 "base_bdevs_list": [ 00:09:21.062 { 00:09:21.062 "name": null, 00:09:21.062 
"uuid": "475e151a-3d05-448c-8b2d-025e3e63cc79", 00:09:21.062 "is_configured": false, 00:09:21.062 "data_offset": 0, 00:09:21.062 "data_size": 63488 00:09:21.062 }, 00:09:21.062 { 00:09:21.062 "name": null, 00:09:21.062 "uuid": "35ef303a-c0c9-4066-bf29-e70065efb095", 00:09:21.062 "is_configured": false, 00:09:21.062 "data_offset": 0, 00:09:21.062 "data_size": 63488 00:09:21.062 }, 00:09:21.062 { 00:09:21.062 "name": "BaseBdev3", 00:09:21.062 "uuid": "4866d320-cc5e-4a6b-b22e-5751b908b60a", 00:09:21.062 "is_configured": true, 00:09:21.062 "data_offset": 2048, 00:09:21.062 "data_size": 63488 00:09:21.062 }, 00:09:21.062 { 00:09:21.062 "name": "BaseBdev4", 00:09:21.062 "uuid": "8f1915f3-102b-44da-91d9-399bda0cf613", 00:09:21.062 "is_configured": true, 00:09:21.062 "data_offset": 2048, 00:09:21.062 "data_size": 63488 00:09:21.062 } 00:09:21.062 ] 00:09:21.062 }' 00:09:21.062 04:58:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:21.062 04:58:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.322 04:58:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:21.322 04:58:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.322 04:58:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.322 04:58:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:21.322 04:58:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.581 04:58:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:21.581 04:58:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:21.581 04:58:32 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.581 04:58:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.581 [2024-12-14 04:58:32.226984] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:21.581 04:58:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.582 04:58:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:21.582 04:58:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:21.582 04:58:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:21.582 04:58:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:21.582 04:58:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:21.582 04:58:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:21.582 04:58:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:21.582 04:58:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:21.582 04:58:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:21.582 04:58:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:21.582 04:58:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:21.582 04:58:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:21.582 04:58:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.582 04:58:32 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.582 04:58:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.582 04:58:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:21.582 "name": "Existed_Raid", 00:09:21.582 "uuid": "f9414554-f56d-4555-93b0-f33567878072", 00:09:21.582 "strip_size_kb": 64, 00:09:21.582 "state": "configuring", 00:09:21.582 "raid_level": "raid0", 00:09:21.582 "superblock": true, 00:09:21.582 "num_base_bdevs": 4, 00:09:21.582 "num_base_bdevs_discovered": 3, 00:09:21.582 "num_base_bdevs_operational": 4, 00:09:21.582 "base_bdevs_list": [ 00:09:21.582 { 00:09:21.582 "name": null, 00:09:21.582 "uuid": "475e151a-3d05-448c-8b2d-025e3e63cc79", 00:09:21.582 "is_configured": false, 00:09:21.582 "data_offset": 0, 00:09:21.582 "data_size": 63488 00:09:21.582 }, 00:09:21.582 { 00:09:21.582 "name": "BaseBdev2", 00:09:21.582 "uuid": "35ef303a-c0c9-4066-bf29-e70065efb095", 00:09:21.582 "is_configured": true, 00:09:21.582 "data_offset": 2048, 00:09:21.582 "data_size": 63488 00:09:21.582 }, 00:09:21.582 { 00:09:21.582 "name": "BaseBdev3", 00:09:21.582 "uuid": "4866d320-cc5e-4a6b-b22e-5751b908b60a", 00:09:21.582 "is_configured": true, 00:09:21.582 "data_offset": 2048, 00:09:21.582 "data_size": 63488 00:09:21.582 }, 00:09:21.582 { 00:09:21.582 "name": "BaseBdev4", 00:09:21.582 "uuid": "8f1915f3-102b-44da-91d9-399bda0cf613", 00:09:21.582 "is_configured": true, 00:09:21.582 "data_offset": 2048, 00:09:21.582 "data_size": 63488 00:09:21.582 } 00:09:21.582 ] 00:09:21.582 }' 00:09:21.582 04:58:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:21.582 04:58:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.842 04:58:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:21.842 04:58:32 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:21.842 04:58:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.842 04:58:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.842 04:58:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.101 04:58:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:22.101 04:58:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:22.101 04:58:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.101 04:58:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:22.101 04:58:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:22.101 04:58:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.101 04:58:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 475e151a-3d05-448c-8b2d-025e3e63cc79 00:09:22.101 04:58:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.101 04:58:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:22.101 [2024-12-14 04:58:32.812792] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:22.101 [2024-12-14 04:58:32.812981] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:09:22.101 [2024-12-14 04:58:32.812993] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:09:22.101 [2024-12-14 04:58:32.813253] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000006220 00:09:22.101 [2024-12-14 04:58:32.813367] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:09:22.101 [2024-12-14 04:58:32.813378] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:09:22.101 NewBaseBdev 00:09:22.101 [2024-12-14 04:58:32.813471] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:22.101 04:58:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.101 04:58:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:22.101 04:58:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:09:22.101 04:58:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:22.101 04:58:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:22.101 04:58:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:22.101 04:58:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:22.101 04:58:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:22.101 04:58:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.101 04:58:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:22.101 04:58:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.101 04:58:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:22.101 04:58:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.101 04:58:32 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:22.101 [ 00:09:22.101 { 00:09:22.101 "name": "NewBaseBdev", 00:09:22.101 "aliases": [ 00:09:22.101 "475e151a-3d05-448c-8b2d-025e3e63cc79" 00:09:22.101 ], 00:09:22.101 "product_name": "Malloc disk", 00:09:22.101 "block_size": 512, 00:09:22.101 "num_blocks": 65536, 00:09:22.101 "uuid": "475e151a-3d05-448c-8b2d-025e3e63cc79", 00:09:22.101 "assigned_rate_limits": { 00:09:22.101 "rw_ios_per_sec": 0, 00:09:22.101 "rw_mbytes_per_sec": 0, 00:09:22.101 "r_mbytes_per_sec": 0, 00:09:22.101 "w_mbytes_per_sec": 0 00:09:22.101 }, 00:09:22.101 "claimed": true, 00:09:22.101 "claim_type": "exclusive_write", 00:09:22.101 "zoned": false, 00:09:22.101 "supported_io_types": { 00:09:22.101 "read": true, 00:09:22.101 "write": true, 00:09:22.101 "unmap": true, 00:09:22.101 "flush": true, 00:09:22.101 "reset": true, 00:09:22.101 "nvme_admin": false, 00:09:22.101 "nvme_io": false, 00:09:22.101 "nvme_io_md": false, 00:09:22.101 "write_zeroes": true, 00:09:22.101 "zcopy": true, 00:09:22.101 "get_zone_info": false, 00:09:22.101 "zone_management": false, 00:09:22.101 "zone_append": false, 00:09:22.101 "compare": false, 00:09:22.101 "compare_and_write": false, 00:09:22.101 "abort": true, 00:09:22.101 "seek_hole": false, 00:09:22.101 "seek_data": false, 00:09:22.101 "copy": true, 00:09:22.101 "nvme_iov_md": false 00:09:22.101 }, 00:09:22.101 "memory_domains": [ 00:09:22.101 { 00:09:22.101 "dma_device_id": "system", 00:09:22.101 "dma_device_type": 1 00:09:22.101 }, 00:09:22.101 { 00:09:22.101 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:22.101 "dma_device_type": 2 00:09:22.101 } 00:09:22.101 ], 00:09:22.101 "driver_specific": {} 00:09:22.101 } 00:09:22.101 ] 00:09:22.101 04:58:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.101 04:58:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:22.101 04:58:32 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:09:22.101 04:58:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:22.101 04:58:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:22.101 04:58:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:22.101 04:58:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:22.101 04:58:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:22.101 04:58:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:22.101 04:58:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:22.101 04:58:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:22.101 04:58:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:22.101 04:58:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:22.101 04:58:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.101 04:58:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:22.101 04:58:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:22.101 04:58:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.101 04:58:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:22.101 "name": "Existed_Raid", 00:09:22.101 "uuid": "f9414554-f56d-4555-93b0-f33567878072", 00:09:22.101 "strip_size_kb": 64, 00:09:22.101 
"state": "online", 00:09:22.101 "raid_level": "raid0", 00:09:22.101 "superblock": true, 00:09:22.101 "num_base_bdevs": 4, 00:09:22.102 "num_base_bdevs_discovered": 4, 00:09:22.102 "num_base_bdevs_operational": 4, 00:09:22.102 "base_bdevs_list": [ 00:09:22.102 { 00:09:22.102 "name": "NewBaseBdev", 00:09:22.102 "uuid": "475e151a-3d05-448c-8b2d-025e3e63cc79", 00:09:22.102 "is_configured": true, 00:09:22.102 "data_offset": 2048, 00:09:22.102 "data_size": 63488 00:09:22.102 }, 00:09:22.102 { 00:09:22.102 "name": "BaseBdev2", 00:09:22.102 "uuid": "35ef303a-c0c9-4066-bf29-e70065efb095", 00:09:22.102 "is_configured": true, 00:09:22.102 "data_offset": 2048, 00:09:22.102 "data_size": 63488 00:09:22.102 }, 00:09:22.102 { 00:09:22.102 "name": "BaseBdev3", 00:09:22.102 "uuid": "4866d320-cc5e-4a6b-b22e-5751b908b60a", 00:09:22.102 "is_configured": true, 00:09:22.102 "data_offset": 2048, 00:09:22.102 "data_size": 63488 00:09:22.102 }, 00:09:22.102 { 00:09:22.102 "name": "BaseBdev4", 00:09:22.102 "uuid": "8f1915f3-102b-44da-91d9-399bda0cf613", 00:09:22.102 "is_configured": true, 00:09:22.102 "data_offset": 2048, 00:09:22.102 "data_size": 63488 00:09:22.102 } 00:09:22.102 ] 00:09:22.102 }' 00:09:22.102 04:58:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:22.102 04:58:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:22.669 04:58:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:22.669 04:58:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:22.669 04:58:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:22.669 04:58:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:22.669 04:58:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:22.669 
04:58:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:22.669 04:58:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:22.669 04:58:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:22.669 04:58:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.669 04:58:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:22.669 [2024-12-14 04:58:33.264360] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:22.669 04:58:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.669 04:58:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:22.669 "name": "Existed_Raid", 00:09:22.669 "aliases": [ 00:09:22.669 "f9414554-f56d-4555-93b0-f33567878072" 00:09:22.669 ], 00:09:22.669 "product_name": "Raid Volume", 00:09:22.669 "block_size": 512, 00:09:22.669 "num_blocks": 253952, 00:09:22.669 "uuid": "f9414554-f56d-4555-93b0-f33567878072", 00:09:22.669 "assigned_rate_limits": { 00:09:22.669 "rw_ios_per_sec": 0, 00:09:22.669 "rw_mbytes_per_sec": 0, 00:09:22.669 "r_mbytes_per_sec": 0, 00:09:22.669 "w_mbytes_per_sec": 0 00:09:22.669 }, 00:09:22.669 "claimed": false, 00:09:22.669 "zoned": false, 00:09:22.669 "supported_io_types": { 00:09:22.669 "read": true, 00:09:22.669 "write": true, 00:09:22.669 "unmap": true, 00:09:22.669 "flush": true, 00:09:22.669 "reset": true, 00:09:22.669 "nvme_admin": false, 00:09:22.669 "nvme_io": false, 00:09:22.669 "nvme_io_md": false, 00:09:22.669 "write_zeroes": true, 00:09:22.669 "zcopy": false, 00:09:22.669 "get_zone_info": false, 00:09:22.669 "zone_management": false, 00:09:22.669 "zone_append": false, 00:09:22.669 "compare": false, 00:09:22.669 "compare_and_write": false, 00:09:22.669 "abort": 
false, 00:09:22.669 "seek_hole": false, 00:09:22.669 "seek_data": false, 00:09:22.669 "copy": false, 00:09:22.669 "nvme_iov_md": false 00:09:22.669 }, 00:09:22.669 "memory_domains": [ 00:09:22.669 { 00:09:22.669 "dma_device_id": "system", 00:09:22.669 "dma_device_type": 1 00:09:22.669 }, 00:09:22.669 { 00:09:22.669 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:22.669 "dma_device_type": 2 00:09:22.669 }, 00:09:22.670 { 00:09:22.670 "dma_device_id": "system", 00:09:22.670 "dma_device_type": 1 00:09:22.670 }, 00:09:22.670 { 00:09:22.670 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:22.670 "dma_device_type": 2 00:09:22.670 }, 00:09:22.670 { 00:09:22.670 "dma_device_id": "system", 00:09:22.670 "dma_device_type": 1 00:09:22.670 }, 00:09:22.670 { 00:09:22.670 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:22.670 "dma_device_type": 2 00:09:22.670 }, 00:09:22.670 { 00:09:22.670 "dma_device_id": "system", 00:09:22.670 "dma_device_type": 1 00:09:22.670 }, 00:09:22.670 { 00:09:22.670 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:22.670 "dma_device_type": 2 00:09:22.670 } 00:09:22.670 ], 00:09:22.670 "driver_specific": { 00:09:22.670 "raid": { 00:09:22.670 "uuid": "f9414554-f56d-4555-93b0-f33567878072", 00:09:22.670 "strip_size_kb": 64, 00:09:22.670 "state": "online", 00:09:22.670 "raid_level": "raid0", 00:09:22.670 "superblock": true, 00:09:22.670 "num_base_bdevs": 4, 00:09:22.670 "num_base_bdevs_discovered": 4, 00:09:22.670 "num_base_bdevs_operational": 4, 00:09:22.670 "base_bdevs_list": [ 00:09:22.670 { 00:09:22.670 "name": "NewBaseBdev", 00:09:22.670 "uuid": "475e151a-3d05-448c-8b2d-025e3e63cc79", 00:09:22.670 "is_configured": true, 00:09:22.670 "data_offset": 2048, 00:09:22.670 "data_size": 63488 00:09:22.670 }, 00:09:22.670 { 00:09:22.670 "name": "BaseBdev2", 00:09:22.670 "uuid": "35ef303a-c0c9-4066-bf29-e70065efb095", 00:09:22.670 "is_configured": true, 00:09:22.670 "data_offset": 2048, 00:09:22.670 "data_size": 63488 00:09:22.670 }, 00:09:22.670 { 00:09:22.670 
"name": "BaseBdev3", 00:09:22.670 "uuid": "4866d320-cc5e-4a6b-b22e-5751b908b60a", 00:09:22.670 "is_configured": true, 00:09:22.670 "data_offset": 2048, 00:09:22.670 "data_size": 63488 00:09:22.670 }, 00:09:22.670 { 00:09:22.670 "name": "BaseBdev4", 00:09:22.670 "uuid": "8f1915f3-102b-44da-91d9-399bda0cf613", 00:09:22.670 "is_configured": true, 00:09:22.670 "data_offset": 2048, 00:09:22.670 "data_size": 63488 00:09:22.670 } 00:09:22.670 ] 00:09:22.670 } 00:09:22.670 } 00:09:22.670 }' 00:09:22.670 04:58:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:22.670 04:58:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:22.670 BaseBdev2 00:09:22.670 BaseBdev3 00:09:22.670 BaseBdev4' 00:09:22.670 04:58:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:22.670 04:58:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:22.670 04:58:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:22.670 04:58:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:22.670 04:58:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.670 04:58:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:22.670 04:58:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:22.670 04:58:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.670 04:58:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:22.670 04:58:33 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:22.670 04:58:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:22.670 04:58:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:22.670 04:58:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:22.670 04:58:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.670 04:58:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:22.670 04:58:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.670 04:58:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:22.670 04:58:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:22.670 04:58:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:22.670 04:58:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:22.670 04:58:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:22.670 04:58:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.670 04:58:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:22.670 04:58:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.670 04:58:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:22.670 04:58:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:09:22.670 04:58:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:22.929 04:58:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:22.929 04:58:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:09:22.929 04:58:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.929 04:58:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:22.929 04:58:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.929 04:58:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:22.929 04:58:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:22.929 04:58:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:22.929 04:58:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.929 04:58:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:22.929 [2024-12-14 04:58:33.603442] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:22.929 [2024-12-14 04:58:33.603512] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:22.929 [2024-12-14 04:58:33.603589] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:22.929 [2024-12-14 04:58:33.603653] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:22.929 [2024-12-14 04:58:33.603662] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, 
state offline 00:09:22.929 04:58:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.929 04:58:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 81014 00:09:22.929 04:58:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 81014 ']' 00:09:22.929 04:58:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 81014 00:09:22.929 04:58:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:09:22.929 04:58:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:22.929 04:58:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81014 00:09:22.929 04:58:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:22.929 04:58:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:22.929 04:58:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81014' 00:09:22.929 killing process with pid 81014 00:09:22.929 04:58:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 81014 00:09:22.929 [2024-12-14 04:58:33.639923] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:22.929 04:58:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 81014 00:09:22.929 [2024-12-14 04:58:33.680999] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:23.210 04:58:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:09:23.210 ************************************ 00:09:23.210 END TEST raid_state_function_test_sb 00:09:23.210 ************************************ 00:09:23.210 00:09:23.210 real 0m9.581s 00:09:23.210 user 0m16.528s 00:09:23.210 sys 
0m1.850s 00:09:23.210 04:58:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:23.210 04:58:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:23.210 04:58:33 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:09:23.210 04:58:33 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:09:23.210 04:58:33 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:23.210 04:58:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:23.210 ************************************ 00:09:23.210 START TEST raid_superblock_test 00:09:23.210 ************************************ 00:09:23.210 04:58:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid0 4 00:09:23.210 04:58:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:09:23.210 04:58:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:09:23.210 04:58:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:09:23.210 04:58:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:09:23.210 04:58:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:09:23.210 04:58:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:09:23.210 04:58:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:09:23.210 04:58:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:09:23.210 04:58:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:09:23.210 04:58:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:09:23.210 04:58:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # 
local strip_size_create_arg 00:09:23.210 04:58:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:09:23.210 04:58:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:09:23.210 04:58:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:09:23.210 04:58:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:09:23.210 04:58:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:09:23.210 04:58:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=81668 00:09:23.210 04:58:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:09:23.210 04:58:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 81668 00:09:23.210 04:58:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 81668 ']' 00:09:23.210 04:58:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:23.210 04:58:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:23.210 04:58:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:23.210 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:23.210 04:58:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:23.210 04:58:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.483 [2024-12-14 04:58:34.088611] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:09:23.483 [2024-12-14 04:58:34.088817] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81668 ] 00:09:23.483 [2024-12-14 04:58:34.249261] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:23.483 [2024-12-14 04:58:34.294392] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:23.483 [2024-12-14 04:58:34.336002] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:23.483 [2024-12-14 04:58:34.336126] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:24.052 04:58:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:24.052 04:58:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:09:24.052 04:58:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:09:24.052 04:58:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:24.052 04:58:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:09:24.052 04:58:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:09:24.052 04:58:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:09:24.052 04:58:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:24.052 04:58:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:24.052 04:58:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:24.052 04:58:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:09:24.052 
04:58:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.052 04:58:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.052 malloc1 00:09:24.052 04:58:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.052 04:58:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:24.052 04:58:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.052 04:58:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.311 [2024-12-14 04:58:34.934592] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:24.311 [2024-12-14 04:58:34.934711] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:24.311 [2024-12-14 04:58:34.934769] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:09:24.311 [2024-12-14 04:58:34.934813] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:24.311 [2024-12-14 04:58:34.936992] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:24.311 [2024-12-14 04:58:34.937069] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:24.311 pt1 00:09:24.311 04:58:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.311 04:58:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:24.311 04:58:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:24.311 04:58:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:09:24.311 04:58:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:09:24.311 04:58:34 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:09:24.311 04:58:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:24.311 04:58:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:24.311 04:58:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:24.311 04:58:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:09:24.311 04:58:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.311 04:58:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.311 malloc2 00:09:24.311 04:58:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.311 04:58:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:24.311 04:58:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.311 04:58:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.311 [2024-12-14 04:58:34.983266] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:24.311 [2024-12-14 04:58:34.983372] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:24.311 [2024-12-14 04:58:34.983409] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:09:24.311 [2024-12-14 04:58:34.983434] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:24.311 [2024-12-14 04:58:34.988278] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:24.311 [2024-12-14 04:58:34.988449] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:24.311 
pt2 00:09:24.311 04:58:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.311 04:58:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:24.311 04:58:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:24.311 04:58:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:09:24.311 04:58:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:09:24.311 04:58:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:09:24.311 04:58:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:24.311 04:58:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:24.311 04:58:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:24.311 04:58:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:09:24.312 04:58:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.312 04:58:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.312 malloc3 00:09:24.312 04:58:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.312 04:58:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:24.312 04:58:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.312 04:58:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.312 [2024-12-14 04:58:35.014323] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:24.312 [2024-12-14 04:58:35.014426] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:24.312 [2024-12-14 04:58:35.014461] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:09:24.312 [2024-12-14 04:58:35.014494] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:24.312 [2024-12-14 04:58:35.016604] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:24.312 [2024-12-14 04:58:35.016691] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:24.312 pt3 00:09:24.312 04:58:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.312 04:58:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:24.312 04:58:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:24.312 04:58:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:09:24.312 04:58:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:09:24.312 04:58:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:09:24.312 04:58:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:24.312 04:58:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:24.312 04:58:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:24.312 04:58:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:09:24.312 04:58:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.312 04:58:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.312 malloc4 00:09:24.312 04:58:35 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.312 04:58:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:09:24.312 04:58:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.312 04:58:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.312 [2024-12-14 04:58:35.046711] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:09:24.312 [2024-12-14 04:58:35.046813] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:24.312 [2024-12-14 04:58:35.046845] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:09:24.312 [2024-12-14 04:58:35.046879] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:24.312 [2024-12-14 04:58:35.048902] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:24.312 [2024-12-14 04:58:35.048977] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:09:24.312 pt4 00:09:24.312 04:58:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.312 04:58:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:24.312 04:58:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:24.312 04:58:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:09:24.312 04:58:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.312 04:58:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.312 [2024-12-14 04:58:35.058775] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:24.312 [2024-12-14 
04:58:35.060587] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:24.312 [2024-12-14 04:58:35.060653] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:24.312 [2024-12-14 04:58:35.060710] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:09:24.312 [2024-12-14 04:58:35.060849] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:09:24.312 [2024-12-14 04:58:35.060862] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:09:24.312 [2024-12-14 04:58:35.061078] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:24.312 [2024-12-14 04:58:35.061245] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:09:24.312 [2024-12-14 04:58:35.061256] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:09:24.312 [2024-12-14 04:58:35.061378] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:24.312 04:58:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.312 04:58:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:09:24.312 04:58:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:24.312 04:58:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:24.312 04:58:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:24.312 04:58:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:24.312 04:58:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:24.312 04:58:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:09:24.312 04:58:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:24.312 04:58:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:24.312 04:58:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:24.312 04:58:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:24.312 04:58:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:24.312 04:58:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.312 04:58:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.312 04:58:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.312 04:58:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:24.312 "name": "raid_bdev1", 00:09:24.312 "uuid": "fd5df327-b5e7-4c30-a961-23fc8dbfdee5", 00:09:24.312 "strip_size_kb": 64, 00:09:24.312 "state": "online", 00:09:24.312 "raid_level": "raid0", 00:09:24.312 "superblock": true, 00:09:24.312 "num_base_bdevs": 4, 00:09:24.312 "num_base_bdevs_discovered": 4, 00:09:24.312 "num_base_bdevs_operational": 4, 00:09:24.312 "base_bdevs_list": [ 00:09:24.312 { 00:09:24.312 "name": "pt1", 00:09:24.312 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:24.312 "is_configured": true, 00:09:24.312 "data_offset": 2048, 00:09:24.312 "data_size": 63488 00:09:24.312 }, 00:09:24.312 { 00:09:24.312 "name": "pt2", 00:09:24.312 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:24.312 "is_configured": true, 00:09:24.312 "data_offset": 2048, 00:09:24.312 "data_size": 63488 00:09:24.312 }, 00:09:24.312 { 00:09:24.312 "name": "pt3", 00:09:24.312 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:24.312 "is_configured": true, 00:09:24.312 "data_offset": 2048, 00:09:24.312 
"data_size": 63488 00:09:24.312 }, 00:09:24.312 { 00:09:24.312 "name": "pt4", 00:09:24.312 "uuid": "00000000-0000-0000-0000-000000000004", 00:09:24.312 "is_configured": true, 00:09:24.312 "data_offset": 2048, 00:09:24.312 "data_size": 63488 00:09:24.312 } 00:09:24.312 ] 00:09:24.312 }' 00:09:24.312 04:58:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:24.312 04:58:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.880 04:58:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:09:24.880 04:58:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:24.880 04:58:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:24.880 04:58:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:24.880 04:58:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:24.880 04:58:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:24.880 04:58:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:24.880 04:58:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:24.880 04:58:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.880 04:58:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.880 [2024-12-14 04:58:35.514272] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:24.880 04:58:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.880 04:58:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:24.880 "name": "raid_bdev1", 00:09:24.880 "aliases": [ 00:09:24.880 "fd5df327-b5e7-4c30-a961-23fc8dbfdee5" 
00:09:24.880 ], 00:09:24.880 "product_name": "Raid Volume", 00:09:24.880 "block_size": 512, 00:09:24.880 "num_blocks": 253952, 00:09:24.880 "uuid": "fd5df327-b5e7-4c30-a961-23fc8dbfdee5", 00:09:24.880 "assigned_rate_limits": { 00:09:24.880 "rw_ios_per_sec": 0, 00:09:24.880 "rw_mbytes_per_sec": 0, 00:09:24.880 "r_mbytes_per_sec": 0, 00:09:24.880 "w_mbytes_per_sec": 0 00:09:24.880 }, 00:09:24.880 "claimed": false, 00:09:24.880 "zoned": false, 00:09:24.880 "supported_io_types": { 00:09:24.880 "read": true, 00:09:24.880 "write": true, 00:09:24.880 "unmap": true, 00:09:24.880 "flush": true, 00:09:24.880 "reset": true, 00:09:24.880 "nvme_admin": false, 00:09:24.880 "nvme_io": false, 00:09:24.880 "nvme_io_md": false, 00:09:24.880 "write_zeroes": true, 00:09:24.880 "zcopy": false, 00:09:24.880 "get_zone_info": false, 00:09:24.880 "zone_management": false, 00:09:24.880 "zone_append": false, 00:09:24.880 "compare": false, 00:09:24.880 "compare_and_write": false, 00:09:24.880 "abort": false, 00:09:24.880 "seek_hole": false, 00:09:24.880 "seek_data": false, 00:09:24.880 "copy": false, 00:09:24.880 "nvme_iov_md": false 00:09:24.880 }, 00:09:24.880 "memory_domains": [ 00:09:24.880 { 00:09:24.880 "dma_device_id": "system", 00:09:24.880 "dma_device_type": 1 00:09:24.880 }, 00:09:24.880 { 00:09:24.880 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:24.880 "dma_device_type": 2 00:09:24.880 }, 00:09:24.880 { 00:09:24.880 "dma_device_id": "system", 00:09:24.880 "dma_device_type": 1 00:09:24.880 }, 00:09:24.880 { 00:09:24.880 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:24.880 "dma_device_type": 2 00:09:24.880 }, 00:09:24.880 { 00:09:24.880 "dma_device_id": "system", 00:09:24.880 "dma_device_type": 1 00:09:24.880 }, 00:09:24.880 { 00:09:24.880 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:24.880 "dma_device_type": 2 00:09:24.880 }, 00:09:24.880 { 00:09:24.880 "dma_device_id": "system", 00:09:24.880 "dma_device_type": 1 00:09:24.880 }, 00:09:24.880 { 00:09:24.880 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:09:24.880 "dma_device_type": 2 00:09:24.880 } 00:09:24.880 ], 00:09:24.880 "driver_specific": { 00:09:24.880 "raid": { 00:09:24.880 "uuid": "fd5df327-b5e7-4c30-a961-23fc8dbfdee5", 00:09:24.880 "strip_size_kb": 64, 00:09:24.880 "state": "online", 00:09:24.880 "raid_level": "raid0", 00:09:24.880 "superblock": true, 00:09:24.880 "num_base_bdevs": 4, 00:09:24.880 "num_base_bdevs_discovered": 4, 00:09:24.880 "num_base_bdevs_operational": 4, 00:09:24.880 "base_bdevs_list": [ 00:09:24.880 { 00:09:24.880 "name": "pt1", 00:09:24.880 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:24.880 "is_configured": true, 00:09:24.880 "data_offset": 2048, 00:09:24.880 "data_size": 63488 00:09:24.880 }, 00:09:24.880 { 00:09:24.880 "name": "pt2", 00:09:24.880 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:24.880 "is_configured": true, 00:09:24.880 "data_offset": 2048, 00:09:24.880 "data_size": 63488 00:09:24.880 }, 00:09:24.880 { 00:09:24.880 "name": "pt3", 00:09:24.880 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:24.880 "is_configured": true, 00:09:24.880 "data_offset": 2048, 00:09:24.880 "data_size": 63488 00:09:24.880 }, 00:09:24.880 { 00:09:24.880 "name": "pt4", 00:09:24.880 "uuid": "00000000-0000-0000-0000-000000000004", 00:09:24.880 "is_configured": true, 00:09:24.881 "data_offset": 2048, 00:09:24.881 "data_size": 63488 00:09:24.881 } 00:09:24.881 ] 00:09:24.881 } 00:09:24.881 } 00:09:24.881 }' 00:09:24.881 04:58:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:24.881 04:58:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:24.881 pt2 00:09:24.881 pt3 00:09:24.881 pt4' 00:09:24.881 04:58:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:24.881 04:58:35 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:24.881 04:58:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:24.881 04:58:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:24.881 04:58:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:24.881 04:58:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.881 04:58:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.881 04:58:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.881 04:58:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:24.881 04:58:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:24.881 04:58:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:24.881 04:58:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:24.881 04:58:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:24.881 04:58:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.881 04:58:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.881 04:58:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.881 04:58:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:24.881 04:58:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:24.881 04:58:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:24.881 04:58:35 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:24.881 04:58:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:24.881 04:58:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.881 04:58:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.881 04:58:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.881 04:58:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:24.881 04:58:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:24.881 04:58:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:24.881 04:58:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:24.881 04:58:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:09:24.881 04:58:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.881 04:58:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.141 04:58:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.141 04:58:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:25.141 04:58:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:25.141 04:58:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:25.141 04:58:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.141 04:58:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:09:25.141 04:58:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:09:25.141 [2024-12-14 04:58:35.809707] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:25.141 04:58:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.141 04:58:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=fd5df327-b5e7-4c30-a961-23fc8dbfdee5 00:09:25.141 04:58:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z fd5df327-b5e7-4c30-a961-23fc8dbfdee5 ']' 00:09:25.141 04:58:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:25.141 04:58:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.141 04:58:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.141 [2024-12-14 04:58:35.857316] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:25.141 [2024-12-14 04:58:35.857388] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:25.141 [2024-12-14 04:58:35.857467] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:25.141 [2024-12-14 04:58:35.857545] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:25.141 [2024-12-14 04:58:35.857555] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:09:25.141 04:58:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.141 04:58:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:25.141 04:58:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.141 04:58:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 
-- # set +x 00:09:25.141 04:58:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:09:25.141 04:58:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.141 04:58:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:09:25.141 04:58:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:09:25.141 04:58:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:25.141 04:58:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:09:25.141 04:58:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.141 04:58:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.141 04:58:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.141 04:58:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:25.141 04:58:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:09:25.141 04:58:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.141 04:58:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.141 04:58:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.141 04:58:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:25.141 04:58:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:09:25.141 04:58:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.141 04:58:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.141 04:58:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:09:25.141 04:58:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:25.141 04:58:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:09:25.141 04:58:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.141 04:58:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.141 04:58:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.141 04:58:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:09:25.141 04:58:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:09:25.141 04:58:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.141 04:58:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.141 04:58:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.141 04:58:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:09:25.141 04:58:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:09:25.141 04:58:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:09:25.141 04:58:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:09:25.141 04:58:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:09:25.141 04:58:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:25.141 04:58:35 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@642 -- # type -t rpc_cmd 00:09:25.142 04:58:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:25.142 04:58:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:09:25.142 04:58:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.142 04:58:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.142 [2024-12-14 04:58:36.005107] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:09:25.142 [2024-12-14 04:58:36.006920] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:09:25.142 [2024-12-14 04:58:36.007018] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:09:25.142 [2024-12-14 04:58:36.007050] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:09:25.142 [2024-12-14 04:58:36.007105] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:09:25.142 [2024-12-14 04:58:36.007167] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:09:25.142 [2024-12-14 04:58:36.007190] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:09:25.142 [2024-12-14 04:58:36.007213] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:09:25.142 [2024-12-14 04:58:36.007244] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:25.142 [2024-12-14 04:58:36.007253] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state 
configuring 00:09:25.142 request: 00:09:25.142 { 00:09:25.142 "name": "raid_bdev1", 00:09:25.142 "raid_level": "raid0", 00:09:25.142 "base_bdevs": [ 00:09:25.142 "malloc1", 00:09:25.142 "malloc2", 00:09:25.142 "malloc3", 00:09:25.142 "malloc4" 00:09:25.142 ], 00:09:25.142 "strip_size_kb": 64, 00:09:25.142 "superblock": false, 00:09:25.142 "method": "bdev_raid_create", 00:09:25.142 "req_id": 1 00:09:25.142 } 00:09:25.142 Got JSON-RPC error response 00:09:25.142 response: 00:09:25.142 { 00:09:25.142 "code": -17, 00:09:25.142 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:09:25.142 } 00:09:25.142 04:58:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:09:25.142 04:58:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:09:25.142 04:58:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:25.142 04:58:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:25.142 04:58:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:25.142 04:58:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:09:25.142 04:58:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:25.142 04:58:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.142 04:58:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.402 04:58:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.402 04:58:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:09:25.402 04:58:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:09:25.402 04:58:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:09:25.402 04:58:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.402 04:58:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.402 [2024-12-14 04:58:36.060984] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:25.402 [2024-12-14 04:58:36.061097] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:25.402 [2024-12-14 04:58:36.061134] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:09:25.402 [2024-12-14 04:58:36.061174] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:25.402 [2024-12-14 04:58:36.063241] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:25.402 [2024-12-14 04:58:36.063316] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:25.402 [2024-12-14 04:58:36.063405] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:25.402 [2024-12-14 04:58:36.063484] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:25.402 pt1 00:09:25.402 04:58:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.402 04:58:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:09:25.402 04:58:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:25.402 04:58:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:25.402 04:58:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:25.402 04:58:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:25.402 04:58:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:09:25.402 04:58:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:25.402 04:58:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:25.402 04:58:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:25.402 04:58:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:25.402 04:58:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:25.402 04:58:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:25.402 04:58:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.402 04:58:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.402 04:58:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.402 04:58:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:25.402 "name": "raid_bdev1", 00:09:25.402 "uuid": "fd5df327-b5e7-4c30-a961-23fc8dbfdee5", 00:09:25.402 "strip_size_kb": 64, 00:09:25.402 "state": "configuring", 00:09:25.402 "raid_level": "raid0", 00:09:25.402 "superblock": true, 00:09:25.402 "num_base_bdevs": 4, 00:09:25.402 "num_base_bdevs_discovered": 1, 00:09:25.402 "num_base_bdevs_operational": 4, 00:09:25.402 "base_bdevs_list": [ 00:09:25.402 { 00:09:25.402 "name": "pt1", 00:09:25.402 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:25.402 "is_configured": true, 00:09:25.402 "data_offset": 2048, 00:09:25.402 "data_size": 63488 00:09:25.402 }, 00:09:25.402 { 00:09:25.402 "name": null, 00:09:25.402 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:25.402 "is_configured": false, 00:09:25.402 "data_offset": 2048, 00:09:25.402 "data_size": 63488 00:09:25.402 }, 00:09:25.402 { 00:09:25.402 "name": null, 00:09:25.402 "uuid": 
"00000000-0000-0000-0000-000000000003", 00:09:25.402 "is_configured": false, 00:09:25.402 "data_offset": 2048, 00:09:25.402 "data_size": 63488 00:09:25.402 }, 00:09:25.402 { 00:09:25.402 "name": null, 00:09:25.402 "uuid": "00000000-0000-0000-0000-000000000004", 00:09:25.402 "is_configured": false, 00:09:25.402 "data_offset": 2048, 00:09:25.402 "data_size": 63488 00:09:25.402 } 00:09:25.402 ] 00:09:25.402 }' 00:09:25.402 04:58:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:25.402 04:58:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.662 04:58:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:09:25.662 04:58:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:25.662 04:58:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.662 04:58:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.662 [2024-12-14 04:58:36.496261] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:25.662 [2024-12-14 04:58:36.496313] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:25.662 [2024-12-14 04:58:36.496332] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:09:25.662 [2024-12-14 04:58:36.496342] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:25.662 [2024-12-14 04:58:36.496710] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:25.662 [2024-12-14 04:58:36.496726] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:25.662 [2024-12-14 04:58:36.496787] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:25.662 [2024-12-14 04:58:36.496812] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:25.662 pt2 00:09:25.662 04:58:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.662 04:58:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:09:25.662 04:58:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.662 04:58:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.662 [2024-12-14 04:58:36.508254] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:09:25.662 04:58:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.662 04:58:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:09:25.662 04:58:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:25.662 04:58:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:25.662 04:58:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:25.662 04:58:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:25.662 04:58:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:25.662 04:58:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:25.662 04:58:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:25.662 04:58:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:25.662 04:58:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:25.662 04:58:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:25.662 04:58:36 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:25.662 04:58:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.662 04:58:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.662 04:58:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.922 04:58:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:25.922 "name": "raid_bdev1", 00:09:25.922 "uuid": "fd5df327-b5e7-4c30-a961-23fc8dbfdee5", 00:09:25.922 "strip_size_kb": 64, 00:09:25.922 "state": "configuring", 00:09:25.922 "raid_level": "raid0", 00:09:25.922 "superblock": true, 00:09:25.922 "num_base_bdevs": 4, 00:09:25.922 "num_base_bdevs_discovered": 1, 00:09:25.922 "num_base_bdevs_operational": 4, 00:09:25.922 "base_bdevs_list": [ 00:09:25.922 { 00:09:25.922 "name": "pt1", 00:09:25.922 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:25.922 "is_configured": true, 00:09:25.922 "data_offset": 2048, 00:09:25.922 "data_size": 63488 00:09:25.922 }, 00:09:25.922 { 00:09:25.922 "name": null, 00:09:25.922 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:25.922 "is_configured": false, 00:09:25.922 "data_offset": 0, 00:09:25.922 "data_size": 63488 00:09:25.922 }, 00:09:25.922 { 00:09:25.922 "name": null, 00:09:25.922 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:25.922 "is_configured": false, 00:09:25.922 "data_offset": 2048, 00:09:25.922 "data_size": 63488 00:09:25.922 }, 00:09:25.922 { 00:09:25.922 "name": null, 00:09:25.922 "uuid": "00000000-0000-0000-0000-000000000004", 00:09:25.922 "is_configured": false, 00:09:25.922 "data_offset": 2048, 00:09:25.922 "data_size": 63488 00:09:25.922 } 00:09:25.922 ] 00:09:25.922 }' 00:09:25.922 04:58:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:25.922 04:58:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # 
set +x 00:09:26.183 04:58:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:09:26.183 04:58:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:26.183 04:58:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:26.183 04:58:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.183 04:58:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.183 [2024-12-14 04:58:36.899553] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:26.183 [2024-12-14 04:58:36.899652] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:26.183 [2024-12-14 04:58:36.899684] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:09:26.183 [2024-12-14 04:58:36.899713] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:26.183 [2024-12-14 04:58:36.900094] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:26.183 [2024-12-14 04:58:36.900171] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:26.183 [2024-12-14 04:58:36.900280] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:26.183 [2024-12-14 04:58:36.900338] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:26.183 pt2 00:09:26.183 04:58:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.183 04:58:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:26.183 04:58:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:26.183 04:58:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p 
pt3 -u 00000000-0000-0000-0000-000000000003 00:09:26.183 04:58:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.183 04:58:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.183 [2024-12-14 04:58:36.911501] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:26.183 [2024-12-14 04:58:36.911590] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:26.183 [2024-12-14 04:58:36.911625] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:09:26.183 [2024-12-14 04:58:36.911654] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:26.183 [2024-12-14 04:58:36.912005] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:26.183 [2024-12-14 04:58:36.912072] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:26.183 [2024-12-14 04:58:36.912182] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:26.183 [2024-12-14 04:58:36.912241] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:26.183 pt3 00:09:26.183 04:58:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.183 04:58:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:26.183 04:58:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:26.183 04:58:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:09:26.183 04:58:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.183 04:58:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.183 [2024-12-14 04:58:36.923489] vbdev_passthru.c: 607:vbdev_passthru_register: 
*NOTICE*: Match on malloc4 00:09:26.183 [2024-12-14 04:58:36.923539] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:26.183 [2024-12-14 04:58:36.923554] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:09:26.183 [2024-12-14 04:58:36.923563] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:26.183 [2024-12-14 04:58:36.923838] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:26.183 [2024-12-14 04:58:36.923855] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:09:26.183 [2024-12-14 04:58:36.923902] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:09:26.183 [2024-12-14 04:58:36.923920] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:09:26.183 [2024-12-14 04:58:36.924006] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:09:26.183 [2024-12-14 04:58:36.924018] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:09:26.183 [2024-12-14 04:58:36.924263] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:26.183 [2024-12-14 04:58:36.924388] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:09:26.183 [2024-12-14 04:58:36.924397] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:09:26.183 [2024-12-14 04:58:36.924514] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:26.183 pt4 00:09:26.183 04:58:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.183 04:58:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:26.183 04:58:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:26.183 
04:58:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:09:26.183 04:58:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:26.183 04:58:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:26.183 04:58:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:26.183 04:58:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:26.183 04:58:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:26.183 04:58:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:26.183 04:58:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:26.183 04:58:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:26.183 04:58:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:26.183 04:58:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:26.183 04:58:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:26.183 04:58:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.183 04:58:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.183 04:58:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.183 04:58:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:26.183 "name": "raid_bdev1", 00:09:26.183 "uuid": "fd5df327-b5e7-4c30-a961-23fc8dbfdee5", 00:09:26.183 "strip_size_kb": 64, 00:09:26.183 "state": "online", 00:09:26.183 "raid_level": "raid0", 00:09:26.183 "superblock": true, 00:09:26.183 
"num_base_bdevs": 4, 00:09:26.183 "num_base_bdevs_discovered": 4, 00:09:26.183 "num_base_bdevs_operational": 4, 00:09:26.183 "base_bdevs_list": [ 00:09:26.183 { 00:09:26.183 "name": "pt1", 00:09:26.183 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:26.183 "is_configured": true, 00:09:26.183 "data_offset": 2048, 00:09:26.183 "data_size": 63488 00:09:26.183 }, 00:09:26.183 { 00:09:26.183 "name": "pt2", 00:09:26.183 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:26.183 "is_configured": true, 00:09:26.183 "data_offset": 2048, 00:09:26.183 "data_size": 63488 00:09:26.183 }, 00:09:26.183 { 00:09:26.183 "name": "pt3", 00:09:26.183 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:26.183 "is_configured": true, 00:09:26.183 "data_offset": 2048, 00:09:26.183 "data_size": 63488 00:09:26.183 }, 00:09:26.183 { 00:09:26.183 "name": "pt4", 00:09:26.183 "uuid": "00000000-0000-0000-0000-000000000004", 00:09:26.183 "is_configured": true, 00:09:26.183 "data_offset": 2048, 00:09:26.183 "data_size": 63488 00:09:26.183 } 00:09:26.183 ] 00:09:26.183 }' 00:09:26.183 04:58:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:26.183 04:58:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.443 04:58:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:09:26.443 04:58:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:26.443 04:58:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:26.443 04:58:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:26.443 04:58:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:26.443 04:58:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:26.443 04:58:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # 
rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:26.443 04:58:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.443 04:58:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.443 04:58:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:26.443 [2024-12-14 04:58:37.299211] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:26.443 04:58:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.703 04:58:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:26.703 "name": "raid_bdev1", 00:09:26.703 "aliases": [ 00:09:26.703 "fd5df327-b5e7-4c30-a961-23fc8dbfdee5" 00:09:26.703 ], 00:09:26.703 "product_name": "Raid Volume", 00:09:26.703 "block_size": 512, 00:09:26.703 "num_blocks": 253952, 00:09:26.703 "uuid": "fd5df327-b5e7-4c30-a961-23fc8dbfdee5", 00:09:26.703 "assigned_rate_limits": { 00:09:26.703 "rw_ios_per_sec": 0, 00:09:26.703 "rw_mbytes_per_sec": 0, 00:09:26.703 "r_mbytes_per_sec": 0, 00:09:26.703 "w_mbytes_per_sec": 0 00:09:26.703 }, 00:09:26.703 "claimed": false, 00:09:26.703 "zoned": false, 00:09:26.703 "supported_io_types": { 00:09:26.703 "read": true, 00:09:26.703 "write": true, 00:09:26.703 "unmap": true, 00:09:26.703 "flush": true, 00:09:26.703 "reset": true, 00:09:26.703 "nvme_admin": false, 00:09:26.703 "nvme_io": false, 00:09:26.703 "nvme_io_md": false, 00:09:26.703 "write_zeroes": true, 00:09:26.703 "zcopy": false, 00:09:26.703 "get_zone_info": false, 00:09:26.703 "zone_management": false, 00:09:26.703 "zone_append": false, 00:09:26.703 "compare": false, 00:09:26.703 "compare_and_write": false, 00:09:26.703 "abort": false, 00:09:26.703 "seek_hole": false, 00:09:26.703 "seek_data": false, 00:09:26.703 "copy": false, 00:09:26.703 "nvme_iov_md": false 00:09:26.703 }, 00:09:26.703 "memory_domains": [ 00:09:26.703 { 00:09:26.703 "dma_device_id": "system", 
00:09:26.703 "dma_device_type": 1 00:09:26.703 }, 00:09:26.703 { 00:09:26.703 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:26.703 "dma_device_type": 2 00:09:26.703 }, 00:09:26.703 { 00:09:26.703 "dma_device_id": "system", 00:09:26.703 "dma_device_type": 1 00:09:26.703 }, 00:09:26.703 { 00:09:26.703 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:26.703 "dma_device_type": 2 00:09:26.703 }, 00:09:26.703 { 00:09:26.703 "dma_device_id": "system", 00:09:26.703 "dma_device_type": 1 00:09:26.703 }, 00:09:26.703 { 00:09:26.703 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:26.703 "dma_device_type": 2 00:09:26.703 }, 00:09:26.703 { 00:09:26.703 "dma_device_id": "system", 00:09:26.703 "dma_device_type": 1 00:09:26.703 }, 00:09:26.703 { 00:09:26.703 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:26.703 "dma_device_type": 2 00:09:26.703 } 00:09:26.703 ], 00:09:26.703 "driver_specific": { 00:09:26.703 "raid": { 00:09:26.703 "uuid": "fd5df327-b5e7-4c30-a961-23fc8dbfdee5", 00:09:26.703 "strip_size_kb": 64, 00:09:26.703 "state": "online", 00:09:26.703 "raid_level": "raid0", 00:09:26.703 "superblock": true, 00:09:26.703 "num_base_bdevs": 4, 00:09:26.703 "num_base_bdevs_discovered": 4, 00:09:26.703 "num_base_bdevs_operational": 4, 00:09:26.703 "base_bdevs_list": [ 00:09:26.703 { 00:09:26.703 "name": "pt1", 00:09:26.703 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:26.703 "is_configured": true, 00:09:26.703 "data_offset": 2048, 00:09:26.703 "data_size": 63488 00:09:26.703 }, 00:09:26.703 { 00:09:26.703 "name": "pt2", 00:09:26.703 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:26.703 "is_configured": true, 00:09:26.703 "data_offset": 2048, 00:09:26.703 "data_size": 63488 00:09:26.703 }, 00:09:26.703 { 00:09:26.703 "name": "pt3", 00:09:26.703 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:26.703 "is_configured": true, 00:09:26.703 "data_offset": 2048, 00:09:26.703 "data_size": 63488 00:09:26.703 }, 00:09:26.703 { 00:09:26.703 "name": "pt4", 00:09:26.703 
"uuid": "00000000-0000-0000-0000-000000000004", 00:09:26.703 "is_configured": true, 00:09:26.703 "data_offset": 2048, 00:09:26.703 "data_size": 63488 00:09:26.703 } 00:09:26.703 ] 00:09:26.703 } 00:09:26.703 } 00:09:26.703 }' 00:09:26.703 04:58:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:26.703 04:58:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:26.703 pt2 00:09:26.703 pt3 00:09:26.703 pt4' 00:09:26.703 04:58:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:26.703 04:58:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:26.703 04:58:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:26.703 04:58:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:26.703 04:58:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:26.703 04:58:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.703 04:58:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.703 04:58:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.703 04:58:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:26.703 04:58:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:26.703 04:58:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:26.703 04:58:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:26.703 04:58:37 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.703 04:58:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.703 04:58:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:26.703 04:58:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.704 04:58:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:26.704 04:58:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:26.704 04:58:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:26.704 04:58:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:26.704 04:58:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:26.704 04:58:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.704 04:58:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.704 04:58:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.704 04:58:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:26.704 04:58:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:26.704 04:58:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:26.704 04:58:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:09:26.704 04:58:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:26.704 04:58:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 
-- # xtrace_disable 00:09:26.704 04:58:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.704 04:58:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.968 04:58:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:26.968 04:58:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:26.968 04:58:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:26.968 04:58:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.968 04:58:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.968 04:58:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:09:26.968 [2024-12-14 04:58:37.610629] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:26.968 04:58:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.968 04:58:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' fd5df327-b5e7-4c30-a961-23fc8dbfdee5 '!=' fd5df327-b5e7-4c30-a961-23fc8dbfdee5 ']' 00:09:26.968 04:58:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:09:26.968 04:58:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:26.968 04:58:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:26.968 04:58:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 81668 00:09:26.968 04:58:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 81668 ']' 00:09:26.968 04:58:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 81668 00:09:26.968 04:58:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:09:26.968 04:58:37 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:26.968 04:58:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81668 00:09:26.968 killing process with pid 81668 00:09:26.968 04:58:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:26.968 04:58:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:26.968 04:58:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81668' 00:09:26.968 04:58:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 81668 00:09:26.968 [2024-12-14 04:58:37.689750] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:26.968 [2024-12-14 04:58:37.689833] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:26.968 [2024-12-14 04:58:37.689898] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:26.968 [2024-12-14 04:58:37.689909] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:09:26.968 04:58:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 81668 00:09:26.968 [2024-12-14 04:58:37.734011] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:27.229 04:58:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:09:27.229 00:09:27.229 real 0m3.976s 00:09:27.229 user 0m6.207s 00:09:27.229 sys 0m0.887s 00:09:27.229 04:58:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:27.229 04:58:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.229 ************************************ 00:09:27.229 END TEST raid_superblock_test 00:09:27.229 ************************************ 00:09:27.229 
04:58:38 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 4 read 00:09:27.229 04:58:38 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:27.229 04:58:38 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:27.229 04:58:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:27.229 ************************************ 00:09:27.229 START TEST raid_read_error_test 00:09:27.229 ************************************ 00:09:27.229 04:58:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 4 read 00:09:27.229 04:58:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:09:27.229 04:58:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:09:27.229 04:58:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:09:27.229 04:58:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:27.229 04:58:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:27.229 04:58:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:27.229 04:58:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:27.229 04:58:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:27.229 04:58:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:27.229 04:58:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:27.229 04:58:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:27.229 04:58:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:27.229 04:58:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:27.229 04:58:38 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:27.229 04:58:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:09:27.229 04:58:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:27.229 04:58:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:27.229 04:58:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:09:27.229 04:58:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:27.229 04:58:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:27.229 04:58:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:27.229 04:58:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:27.229 04:58:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:27.229 04:58:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:27.229 04:58:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:09:27.229 04:58:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:27.229 04:58:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:27.229 04:58:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:27.229 04:58:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.s0oIbTYbQD 00:09:27.229 04:58:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=81915 00:09:27.229 04:58:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:27.229 04:58:38 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 81915 00:09:27.229 04:58:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 81915 ']' 00:09:27.229 04:58:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:27.229 04:58:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:27.229 04:58:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:27.229 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:27.229 04:58:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:27.229 04:58:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.489 [2024-12-14 04:58:38.150551] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:09:27.489 [2024-12-14 04:58:38.150687] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81915 ] 00:09:27.489 [2024-12-14 04:58:38.309264] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:27.489 [2024-12-14 04:58:38.354407] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:27.748 [2024-12-14 04:58:38.396423] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:27.748 [2024-12-14 04:58:38.396465] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:28.318 04:58:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:28.318 04:58:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:09:28.318 04:58:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:28.318 04:58:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:28.318 04:58:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.318 04:58:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.318 BaseBdev1_malloc 00:09:28.318 04:58:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.318 04:58:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:28.318 04:58:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.318 04:58:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.318 true 00:09:28.318 04:58:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:09:28.318 04:58:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:28.318 04:58:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.318 04:58:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.318 [2024-12-14 04:58:39.010347] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:28.318 [2024-12-14 04:58:39.010426] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:28.318 [2024-12-14 04:58:39.010446] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:28.318 [2024-12-14 04:58:39.010455] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:28.318 [2024-12-14 04:58:39.012551] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:28.318 [2024-12-14 04:58:39.012595] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:28.318 BaseBdev1 00:09:28.318 04:58:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.318 04:58:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:28.318 04:58:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:28.318 04:58:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.318 04:58:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.318 BaseBdev2_malloc 00:09:28.318 04:58:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.318 04:58:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:28.318 04:58:39 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.318 04:58:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.318 true 00:09:28.318 04:58:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.318 04:58:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:28.318 04:58:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.318 04:58:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.318 [2024-12-14 04:58:39.067855] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:28.318 [2024-12-14 04:58:39.067924] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:28.318 [2024-12-14 04:58:39.067950] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:28.318 [2024-12-14 04:58:39.067963] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:28.318 [2024-12-14 04:58:39.070994] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:28.318 [2024-12-14 04:58:39.071044] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:28.318 BaseBdev2 00:09:28.318 04:58:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.318 04:58:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:28.318 04:58:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:28.318 04:58:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.318 04:58:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.318 BaseBdev3_malloc 00:09:28.318 04:58:39 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.318 04:58:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:28.318 04:58:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.318 04:58:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.318 true 00:09:28.318 04:58:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.318 04:58:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:28.318 04:58:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.318 04:58:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.318 [2024-12-14 04:58:39.108556] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:28.318 [2024-12-14 04:58:39.108601] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:28.318 [2024-12-14 04:58:39.108619] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:28.318 [2024-12-14 04:58:39.108628] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:28.318 [2024-12-14 04:58:39.110611] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:28.318 [2024-12-14 04:58:39.110645] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:09:28.318 BaseBdev3 00:09:28.318 04:58:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.318 04:58:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:28.318 04:58:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:09:28.318 04:58:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.318 04:58:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.318 BaseBdev4_malloc 00:09:28.318 04:58:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.318 04:58:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:09:28.318 04:58:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.318 04:58:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.318 true 00:09:28.318 04:58:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.318 04:58:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:09:28.318 04:58:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.318 04:58:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.319 [2024-12-14 04:58:39.149039] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:09:28.319 [2024-12-14 04:58:39.149084] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:28.319 [2024-12-14 04:58:39.149104] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:09:28.319 [2024-12-14 04:58:39.149113] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:28.319 [2024-12-14 04:58:39.151067] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:28.319 [2024-12-14 04:58:39.151119] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:09:28.319 BaseBdev4 00:09:28.319 04:58:39 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.319 04:58:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:09:28.319 04:58:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.319 04:58:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.319 [2024-12-14 04:58:39.157089] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:28.319 [2024-12-14 04:58:39.158922] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:28.319 [2024-12-14 04:58:39.159068] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:28.319 [2024-12-14 04:58:39.159184] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:09:28.319 [2024-12-14 04:58:39.159463] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007080 00:09:28.319 [2024-12-14 04:58:39.159517] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:09:28.319 [2024-12-14 04:58:39.159829] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:28.319 [2024-12-14 04:58:39.160020] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007080 00:09:28.319 [2024-12-14 04:58:39.160072] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007080 00:09:28.319 [2024-12-14 04:58:39.160290] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:28.319 04:58:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.319 04:58:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:09:28.319 04:58:39 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:28.319 04:58:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:28.319 04:58:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:28.319 04:58:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:28.319 04:58:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:28.319 04:58:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:28.319 04:58:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:28.319 04:58:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:28.319 04:58:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:28.319 04:58:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:28.319 04:58:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:28.319 04:58:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.319 04:58:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.319 04:58:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.578 04:58:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:28.578 "name": "raid_bdev1", 00:09:28.578 "uuid": "db15bdb7-5ba3-408b-bd40-92367a3963f3", 00:09:28.578 "strip_size_kb": 64, 00:09:28.578 "state": "online", 00:09:28.578 "raid_level": "raid0", 00:09:28.578 "superblock": true, 00:09:28.578 "num_base_bdevs": 4, 00:09:28.578 "num_base_bdevs_discovered": 4, 00:09:28.578 "num_base_bdevs_operational": 4, 00:09:28.578 "base_bdevs_list": [ 00:09:28.578 
{ 00:09:28.578 "name": "BaseBdev1", 00:09:28.578 "uuid": "8af61576-93ae-53a2-a06d-59a2cf2b0f5b", 00:09:28.578 "is_configured": true, 00:09:28.578 "data_offset": 2048, 00:09:28.578 "data_size": 63488 00:09:28.578 }, 00:09:28.578 { 00:09:28.578 "name": "BaseBdev2", 00:09:28.578 "uuid": "a8125047-00b2-550a-b747-4c41692d4cbf", 00:09:28.578 "is_configured": true, 00:09:28.578 "data_offset": 2048, 00:09:28.578 "data_size": 63488 00:09:28.578 }, 00:09:28.578 { 00:09:28.578 "name": "BaseBdev3", 00:09:28.578 "uuid": "bb81399d-44a6-5ced-9381-47bc862c349a", 00:09:28.578 "is_configured": true, 00:09:28.578 "data_offset": 2048, 00:09:28.578 "data_size": 63488 00:09:28.578 }, 00:09:28.578 { 00:09:28.578 "name": "BaseBdev4", 00:09:28.578 "uuid": "cf85b6d1-d5e2-529f-b910-dc192911431f", 00:09:28.578 "is_configured": true, 00:09:28.578 "data_offset": 2048, 00:09:28.578 "data_size": 63488 00:09:28.578 } 00:09:28.578 ] 00:09:28.578 }' 00:09:28.578 04:58:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:28.578 04:58:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.838 04:58:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:28.838 04:58:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:28.838 [2024-12-14 04:58:39.712485] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:09:29.776 04:58:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:09:29.776 04:58:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.776 04:58:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.776 04:58:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.776 04:58:40 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:29.776 04:58:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:09:29.776 04:58:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:09:29.776 04:58:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:09:29.776 04:58:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:29.776 04:58:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:29.776 04:58:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:29.776 04:58:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:29.776 04:58:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:29.776 04:58:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:29.776 04:58:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:29.776 04:58:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:29.776 04:58:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:29.776 04:58:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:29.776 04:58:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:29.776 04:58:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.776 04:58:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.036 04:58:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.036 04:58:40 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:30.036 "name": "raid_bdev1", 00:09:30.036 "uuid": "db15bdb7-5ba3-408b-bd40-92367a3963f3", 00:09:30.036 "strip_size_kb": 64, 00:09:30.036 "state": "online", 00:09:30.036 "raid_level": "raid0", 00:09:30.036 "superblock": true, 00:09:30.036 "num_base_bdevs": 4, 00:09:30.036 "num_base_bdevs_discovered": 4, 00:09:30.036 "num_base_bdevs_operational": 4, 00:09:30.036 "base_bdevs_list": [ 00:09:30.036 { 00:09:30.036 "name": "BaseBdev1", 00:09:30.036 "uuid": "8af61576-93ae-53a2-a06d-59a2cf2b0f5b", 00:09:30.036 "is_configured": true, 00:09:30.036 "data_offset": 2048, 00:09:30.036 "data_size": 63488 00:09:30.036 }, 00:09:30.036 { 00:09:30.036 "name": "BaseBdev2", 00:09:30.036 "uuid": "a8125047-00b2-550a-b747-4c41692d4cbf", 00:09:30.036 "is_configured": true, 00:09:30.036 "data_offset": 2048, 00:09:30.036 "data_size": 63488 00:09:30.036 }, 00:09:30.036 { 00:09:30.036 "name": "BaseBdev3", 00:09:30.036 "uuid": "bb81399d-44a6-5ced-9381-47bc862c349a", 00:09:30.036 "is_configured": true, 00:09:30.036 "data_offset": 2048, 00:09:30.036 "data_size": 63488 00:09:30.036 }, 00:09:30.036 { 00:09:30.036 "name": "BaseBdev4", 00:09:30.036 "uuid": "cf85b6d1-d5e2-529f-b910-dc192911431f", 00:09:30.036 "is_configured": true, 00:09:30.036 "data_offset": 2048, 00:09:30.036 "data_size": 63488 00:09:30.036 } 00:09:30.036 ] 00:09:30.036 }' 00:09:30.036 04:58:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:30.036 04:58:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.297 04:58:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:30.297 04:58:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.297 04:58:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.297 [2024-12-14 04:58:41.072100] 
bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:30.297 [2024-12-14 04:58:41.072202] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:30.297 [2024-12-14 04:58:41.074666] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:30.297 [2024-12-14 04:58:41.074764] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:30.297 [2024-12-14 04:58:41.074832] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:30.297 [2024-12-14 04:58:41.074890] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state offline 00:09:30.297 { 00:09:30.297 "results": [ 00:09:30.297 { 00:09:30.297 "job": "raid_bdev1", 00:09:30.297 "core_mask": "0x1", 00:09:30.297 "workload": "randrw", 00:09:30.297 "percentage": 50, 00:09:30.297 "status": "finished", 00:09:30.297 "queue_depth": 1, 00:09:30.297 "io_size": 131072, 00:09:30.297 "runtime": 1.360503, 00:09:30.297 "iops": 17185.555636408004, 00:09:30.297 "mibps": 2148.1944545510005, 00:09:30.297 "io_failed": 1, 00:09:30.297 "io_timeout": 0, 00:09:30.297 "avg_latency_us": 80.81198757376536, 00:09:30.297 "min_latency_us": 24.482096069868994, 00:09:30.297 "max_latency_us": 1316.4436681222708 00:09:30.297 } 00:09:30.297 ], 00:09:30.297 "core_count": 1 00:09:30.297 } 00:09:30.297 04:58:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.297 04:58:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 81915 00:09:30.297 04:58:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 81915 ']' 00:09:30.297 04:58:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 81915 00:09:30.297 04:58:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:09:30.297 04:58:41 bdev_raid.raid_read_error_test 
-- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:30.297 04:58:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81915 00:09:30.297 killing process with pid 81915 00:09:30.297 04:58:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:30.297 04:58:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:30.297 04:58:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81915' 00:09:30.297 04:58:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 81915 00:09:30.297 [2024-12-14 04:58:41.120738] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:30.297 04:58:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 81915 00:09:30.297 [2024-12-14 04:58:41.156519] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:30.557 04:58:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:30.557 04:58:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.s0oIbTYbQD 00:09:30.557 04:58:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:30.557 04:58:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:09:30.557 04:58:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:09:30.557 04:58:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:30.557 04:58:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:30.557 04:58:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:09:30.557 ************************************ 00:09:30.557 END TEST raid_read_error_test 00:09:30.557 ************************************ 00:09:30.557 00:09:30.557 real 0m3.356s 
00:09:30.557 user 0m4.235s 00:09:30.557 sys 0m0.538s 00:09:30.557 04:58:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:30.557 04:58:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.816 04:58:41 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 4 write 00:09:30.816 04:58:41 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:30.816 04:58:41 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:30.816 04:58:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:30.816 ************************************ 00:09:30.816 START TEST raid_write_error_test 00:09:30.816 ************************************ 00:09:30.816 04:58:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 4 write 00:09:30.816 04:58:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:09:30.816 04:58:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:09:30.816 04:58:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:09:30.816 04:58:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:30.816 04:58:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:30.816 04:58:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:30.816 04:58:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:30.816 04:58:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:30.816 04:58:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:30.816 04:58:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:30.816 04:58:41 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:30.816 04:58:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:30.816 04:58:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:30.816 04:58:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:30.816 04:58:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:09:30.816 04:58:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:30.816 04:58:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:30.816 04:58:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:09:30.816 04:58:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:30.816 04:58:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:30.816 04:58:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:30.816 04:58:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:30.816 04:58:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:30.816 04:58:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:30.816 04:58:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:09:30.816 04:58:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:30.816 04:58:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:30.816 04:58:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:30.816 04:58:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.MXayxRebyz 00:09:30.816 04:58:41 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=82045 00:09:30.816 04:58:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:30.816 04:58:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 82045 00:09:30.816 04:58:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 82045 ']' 00:09:30.816 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:30.816 04:58:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:30.816 04:58:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:30.816 04:58:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:30.816 04:58:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:30.816 04:58:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.816 [2024-12-14 04:58:41.579452] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:09:30.816 [2024-12-14 04:58:41.579659] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82045 ] 00:09:31.075 [2024-12-14 04:58:41.740363] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:31.075 [2024-12-14 04:58:41.785786] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:31.075 [2024-12-14 04:58:41.827528] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:31.075 [2024-12-14 04:58:41.827563] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:31.645 04:58:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:31.645 04:58:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:09:31.645 04:58:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:31.645 04:58:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:31.645 04:58:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.645 04:58:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.645 BaseBdev1_malloc 00:09:31.645 04:58:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.645 04:58:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:31.645 04:58:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.645 04:58:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.645 true 00:09:31.645 04:58:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:09:31.645 04:58:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:31.645 04:58:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.645 04:58:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.645 [2024-12-14 04:58:42.425596] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:31.645 [2024-12-14 04:58:42.425648] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:31.645 [2024-12-14 04:58:42.425666] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:31.645 [2024-12-14 04:58:42.425675] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:31.645 [2024-12-14 04:58:42.427764] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:31.645 [2024-12-14 04:58:42.427802] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:31.645 BaseBdev1 00:09:31.645 04:58:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.645 04:58:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:31.645 04:58:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:31.645 04:58:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.645 04:58:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.645 BaseBdev2_malloc 00:09:31.645 04:58:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.645 04:58:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:31.645 04:58:42 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.645 04:58:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.645 true 00:09:31.645 04:58:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.645 04:58:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:31.645 04:58:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.645 04:58:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.645 [2024-12-14 04:58:42.475562] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:31.645 [2024-12-14 04:58:42.475628] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:31.645 [2024-12-14 04:58:42.475653] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:31.645 [2024-12-14 04:58:42.475666] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:31.645 [2024-12-14 04:58:42.478493] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:31.645 [2024-12-14 04:58:42.478541] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:31.645 BaseBdev2 00:09:31.645 04:58:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.645 04:58:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:31.645 04:58:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:31.645 04:58:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.645 04:58:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:09:31.645 BaseBdev3_malloc 00:09:31.645 04:58:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.645 04:58:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:31.645 04:58:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.645 04:58:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.645 true 00:09:31.645 04:58:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.645 04:58:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:31.645 04:58:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.645 04:58:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.645 [2024-12-14 04:58:42.515936] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:31.645 [2024-12-14 04:58:42.515984] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:31.645 [2024-12-14 04:58:42.516002] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:31.645 [2024-12-14 04:58:42.516010] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:31.645 [2024-12-14 04:58:42.517997] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:31.645 [2024-12-14 04:58:42.518090] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:09:31.645 BaseBdev3 00:09:31.645 04:58:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.645 04:58:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:31.645 04:58:42 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:09:31.645 04:58:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.645 04:58:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.905 BaseBdev4_malloc 00:09:31.905 04:58:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.905 04:58:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:09:31.905 04:58:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.905 04:58:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.905 true 00:09:31.905 04:58:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.905 04:58:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:09:31.905 04:58:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.905 04:58:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.905 [2024-12-14 04:58:42.556372] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:09:31.905 [2024-12-14 04:58:42.556419] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:31.905 [2024-12-14 04:58:42.556449] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:09:31.905 [2024-12-14 04:58:42.556458] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:31.905 [2024-12-14 04:58:42.558420] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:31.905 [2024-12-14 04:58:42.558505] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:09:31.905 BaseBdev4 
00:09:31.905 04:58:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.905 04:58:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:09:31.905 04:58:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.905 04:58:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.905 [2024-12-14 04:58:42.568406] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:31.905 [2024-12-14 04:58:42.570292] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:31.906 [2024-12-14 04:58:42.570375] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:31.906 [2024-12-14 04:58:42.570426] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:09:31.906 [2024-12-14 04:58:42.570633] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007080 00:09:31.906 [2024-12-14 04:58:42.570653] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:09:31.906 [2024-12-14 04:58:42.570889] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:31.906 [2024-12-14 04:58:42.571017] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007080 00:09:31.906 [2024-12-14 04:58:42.571029] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007080 00:09:31.906 [2024-12-14 04:58:42.571162] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:31.906 04:58:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.906 04:58:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online raid0 64 4 00:09:31.906 04:58:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:31.906 04:58:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:31.906 04:58:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:31.906 04:58:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:31.906 04:58:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:31.906 04:58:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:31.906 04:58:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:31.906 04:58:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:31.906 04:58:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:31.906 04:58:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:31.906 04:58:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:31.906 04:58:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.906 04:58:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.906 04:58:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.906 04:58:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:31.906 "name": "raid_bdev1", 00:09:31.906 "uuid": "6115ec70-dc33-42a6-b062-c45d97090657", 00:09:31.906 "strip_size_kb": 64, 00:09:31.906 "state": "online", 00:09:31.906 "raid_level": "raid0", 00:09:31.906 "superblock": true, 00:09:31.906 "num_base_bdevs": 4, 00:09:31.906 "num_base_bdevs_discovered": 4, 00:09:31.906 
"num_base_bdevs_operational": 4, 00:09:31.906 "base_bdevs_list": [ 00:09:31.906 { 00:09:31.906 "name": "BaseBdev1", 00:09:31.906 "uuid": "188fb04f-b95b-594a-b090-7cd4c778c0ee", 00:09:31.906 "is_configured": true, 00:09:31.906 "data_offset": 2048, 00:09:31.906 "data_size": 63488 00:09:31.906 }, 00:09:31.906 { 00:09:31.906 "name": "BaseBdev2", 00:09:31.906 "uuid": "b8d94354-8ed8-5f87-af2d-a5ff373a17e1", 00:09:31.906 "is_configured": true, 00:09:31.906 "data_offset": 2048, 00:09:31.906 "data_size": 63488 00:09:31.906 }, 00:09:31.906 { 00:09:31.906 "name": "BaseBdev3", 00:09:31.906 "uuid": "23a29135-78d3-5373-9fee-f8bc200459c9", 00:09:31.906 "is_configured": true, 00:09:31.906 "data_offset": 2048, 00:09:31.906 "data_size": 63488 00:09:31.906 }, 00:09:31.906 { 00:09:31.906 "name": "BaseBdev4", 00:09:31.906 "uuid": "59c804e9-7a39-597b-89d0-79a56b03c6a5", 00:09:31.906 "is_configured": true, 00:09:31.906 "data_offset": 2048, 00:09:31.906 "data_size": 63488 00:09:31.906 } 00:09:31.906 ] 00:09:31.906 }' 00:09:31.906 04:58:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:31.906 04:58:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.165 04:58:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:32.165 04:58:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:32.425 [2024-12-14 04:58:43.075872] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:09:33.365 04:58:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:09:33.365 04:58:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.365 04:58:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.365 04:58:43 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.365 04:58:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:33.365 04:58:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:09:33.365 04:58:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:09:33.365 04:58:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:09:33.365 04:58:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:33.365 04:58:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:33.365 04:58:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:33.365 04:58:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:33.365 04:58:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:33.365 04:58:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:33.365 04:58:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:33.365 04:58:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:33.365 04:58:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:33.365 04:58:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:33.365 04:58:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:33.365 04:58:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.365 04:58:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.365 04:58:44 bdev_raid.raid_write_error_test 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.365 04:58:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:33.365 "name": "raid_bdev1", 00:09:33.365 "uuid": "6115ec70-dc33-42a6-b062-c45d97090657", 00:09:33.365 "strip_size_kb": 64, 00:09:33.365 "state": "online", 00:09:33.365 "raid_level": "raid0", 00:09:33.365 "superblock": true, 00:09:33.365 "num_base_bdevs": 4, 00:09:33.365 "num_base_bdevs_discovered": 4, 00:09:33.365 "num_base_bdevs_operational": 4, 00:09:33.365 "base_bdevs_list": [ 00:09:33.365 { 00:09:33.365 "name": "BaseBdev1", 00:09:33.365 "uuid": "188fb04f-b95b-594a-b090-7cd4c778c0ee", 00:09:33.365 "is_configured": true, 00:09:33.365 "data_offset": 2048, 00:09:33.365 "data_size": 63488 00:09:33.365 }, 00:09:33.365 { 00:09:33.365 "name": "BaseBdev2", 00:09:33.365 "uuid": "b8d94354-8ed8-5f87-af2d-a5ff373a17e1", 00:09:33.365 "is_configured": true, 00:09:33.365 "data_offset": 2048, 00:09:33.365 "data_size": 63488 00:09:33.365 }, 00:09:33.365 { 00:09:33.365 "name": "BaseBdev3", 00:09:33.365 "uuid": "23a29135-78d3-5373-9fee-f8bc200459c9", 00:09:33.365 "is_configured": true, 00:09:33.365 "data_offset": 2048, 00:09:33.365 "data_size": 63488 00:09:33.365 }, 00:09:33.365 { 00:09:33.365 "name": "BaseBdev4", 00:09:33.365 "uuid": "59c804e9-7a39-597b-89d0-79a56b03c6a5", 00:09:33.365 "is_configured": true, 00:09:33.365 "data_offset": 2048, 00:09:33.365 "data_size": 63488 00:09:33.365 } 00:09:33.365 ] 00:09:33.365 }' 00:09:33.365 04:58:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:33.365 04:58:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.625 04:58:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:33.625 04:58:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.625 04:58:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # 
set +x 00:09:33.625 [2024-12-14 04:58:44.447751] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:33.625 [2024-12-14 04:58:44.447847] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:33.625 [2024-12-14 04:58:44.450354] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:33.625 [2024-12-14 04:58:44.450457] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:33.625 [2024-12-14 04:58:44.450528] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:33.625 [2024-12-14 04:58:44.450589] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state offline 00:09:33.625 { 00:09:33.625 "results": [ 00:09:33.625 { 00:09:33.625 "job": "raid_bdev1", 00:09:33.625 "core_mask": "0x1", 00:09:33.625 "workload": "randrw", 00:09:33.625 "percentage": 50, 00:09:33.625 "status": "finished", 00:09:33.625 "queue_depth": 1, 00:09:33.625 "io_size": 131072, 00:09:33.625 "runtime": 1.372796, 00:09:33.625 "iops": 17220.32989606613, 00:09:33.625 "mibps": 2152.5412370082663, 00:09:33.625 "io_failed": 1, 00:09:33.625 "io_timeout": 0, 00:09:33.625 "avg_latency_us": 80.63577328189184, 00:09:33.625 "min_latency_us": 24.258515283842794, 00:09:33.625 "max_latency_us": 1366.5257641921398 00:09:33.625 } 00:09:33.625 ], 00:09:33.625 "core_count": 1 00:09:33.625 } 00:09:33.625 04:58:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.625 04:58:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 82045 00:09:33.625 04:58:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 82045 ']' 00:09:33.625 04:58:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 82045 00:09:33.625 04:58:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 
00:09:33.625 04:58:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:33.625 04:58:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82045 00:09:33.625 04:58:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:33.625 04:58:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:33.625 killing process with pid 82045 00:09:33.625 04:58:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82045' 00:09:33.625 04:58:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 82045 00:09:33.625 [2024-12-14 04:58:44.487008] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:33.625 04:58:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 82045 00:09:33.915 [2024-12-14 04:58:44.522294] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:33.915 04:58:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:33.915 04:58:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.MXayxRebyz 00:09:33.915 04:58:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:33.915 04:58:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:09:33.915 04:58:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:09:33.915 04:58:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:33.915 04:58:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:33.915 ************************************ 00:09:33.915 END TEST raid_write_error_test 00:09:33.915 ************************************ 00:09:33.915 04:58:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- 
# [[ 0.73 != \0\.\0\0 ]] 00:09:33.915 00:09:33.915 real 0m3.287s 00:09:33.915 user 0m4.092s 00:09:33.915 sys 0m0.558s 00:09:33.915 04:58:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:33.915 04:58:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.176 04:58:44 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:09:34.176 04:58:44 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:09:34.176 04:58:44 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:34.176 04:58:44 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:34.176 04:58:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:34.176 ************************************ 00:09:34.176 START TEST raid_state_function_test 00:09:34.176 ************************************ 00:09:34.176 04:58:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 4 false 00:09:34.176 04:58:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:09:34.176 04:58:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:09:34.176 04:58:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:09:34.176 04:58:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:34.176 04:58:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:34.176 04:58:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:34.176 04:58:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:34.176 04:58:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:34.176 04:58:44 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:34.176 04:58:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:34.176 04:58:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:34.176 04:58:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:34.176 04:58:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:34.176 04:58:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:34.176 04:58:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:34.176 04:58:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:09:34.176 04:58:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:34.176 04:58:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:34.176 04:58:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:09:34.176 04:58:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:34.176 04:58:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:34.176 04:58:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:34.176 04:58:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:34.176 04:58:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:34.176 04:58:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:09:34.176 04:58:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:34.176 04:58:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # 
strip_size_create_arg='-z 64' 00:09:34.176 04:58:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:09:34.176 04:58:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:09:34.176 04:58:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=82172 00:09:34.176 04:58:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:34.176 Process raid pid: 82172 00:09:34.176 04:58:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 82172' 00:09:34.176 04:58:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 82172 00:09:34.176 04:58:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 82172 ']' 00:09:34.176 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:34.176 04:58:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:34.176 04:58:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:34.176 04:58:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:34.176 04:58:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:34.176 04:58:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.176 [2024-12-14 04:58:44.932378] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:09:34.176 [2024-12-14 04:58:44.932530] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:34.435 [2024-12-14 04:58:45.093367] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:34.435 [2024-12-14 04:58:45.139641] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:34.435 [2024-12-14 04:58:45.181482] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:34.435 [2024-12-14 04:58:45.181518] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:35.004 04:58:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:35.005 04:58:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:09:35.005 04:58:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:35.005 04:58:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.005 04:58:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.005 [2024-12-14 04:58:45.759059] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:35.005 [2024-12-14 04:58:45.759111] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:35.005 [2024-12-14 04:58:45.759122] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:35.005 [2024-12-14 04:58:45.759131] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:35.005 [2024-12-14 04:58:45.759137] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:09:35.005 [2024-12-14 04:58:45.759149] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:35.005 [2024-12-14 04:58:45.759155] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:35.005 [2024-12-14 04:58:45.759173] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:35.005 04:58:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.005 04:58:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:35.005 04:58:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:35.005 04:58:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:35.005 04:58:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:35.005 04:58:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:35.005 04:58:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:35.005 04:58:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:35.005 04:58:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:35.005 04:58:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:35.005 04:58:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:35.005 04:58:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:35.005 04:58:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:35.005 04:58:45 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.005 04:58:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.005 04:58:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.005 04:58:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:35.005 "name": "Existed_Raid", 00:09:35.005 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:35.005 "strip_size_kb": 64, 00:09:35.005 "state": "configuring", 00:09:35.005 "raid_level": "concat", 00:09:35.005 "superblock": false, 00:09:35.005 "num_base_bdevs": 4, 00:09:35.005 "num_base_bdevs_discovered": 0, 00:09:35.005 "num_base_bdevs_operational": 4, 00:09:35.005 "base_bdevs_list": [ 00:09:35.005 { 00:09:35.005 "name": "BaseBdev1", 00:09:35.005 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:35.005 "is_configured": false, 00:09:35.005 "data_offset": 0, 00:09:35.005 "data_size": 0 00:09:35.005 }, 00:09:35.005 { 00:09:35.005 "name": "BaseBdev2", 00:09:35.005 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:35.005 "is_configured": false, 00:09:35.005 "data_offset": 0, 00:09:35.005 "data_size": 0 00:09:35.005 }, 00:09:35.005 { 00:09:35.005 "name": "BaseBdev3", 00:09:35.005 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:35.005 "is_configured": false, 00:09:35.005 "data_offset": 0, 00:09:35.005 "data_size": 0 00:09:35.005 }, 00:09:35.005 { 00:09:35.005 "name": "BaseBdev4", 00:09:35.005 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:35.005 "is_configured": false, 00:09:35.005 "data_offset": 0, 00:09:35.005 "data_size": 0 00:09:35.005 } 00:09:35.005 ] 00:09:35.005 }' 00:09:35.005 04:58:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:35.005 04:58:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.575 04:58:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:09:35.575 04:58:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.575 04:58:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.575 [2024-12-14 04:58:46.202204] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:35.575 [2024-12-14 04:58:46.202289] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:09:35.575 04:58:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.575 04:58:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:35.575 04:58:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.575 04:58:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.575 [2024-12-14 04:58:46.214222] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:35.575 [2024-12-14 04:58:46.214262] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:35.575 [2024-12-14 04:58:46.214271] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:35.575 [2024-12-14 04:58:46.214279] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:35.575 [2024-12-14 04:58:46.214285] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:35.575 [2024-12-14 04:58:46.214294] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:35.575 [2024-12-14 04:58:46.214299] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:35.575 [2024-12-14 04:58:46.214307] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:35.575 04:58:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.575 04:58:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:35.575 04:58:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.575 04:58:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.575 [2024-12-14 04:58:46.234879] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:35.575 BaseBdev1 00:09:35.575 04:58:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.575 04:58:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:35.575 04:58:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:35.575 04:58:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:35.575 04:58:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:35.575 04:58:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:35.575 04:58:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:35.575 04:58:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:35.575 04:58:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.575 04:58:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.575 04:58:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.575 04:58:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:35.575 04:58:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.575 04:58:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.575 [ 00:09:35.575 { 00:09:35.575 "name": "BaseBdev1", 00:09:35.575 "aliases": [ 00:09:35.575 "7e94543f-9959-4020-9075-9b3cab3608fd" 00:09:35.575 ], 00:09:35.575 "product_name": "Malloc disk", 00:09:35.575 "block_size": 512, 00:09:35.575 "num_blocks": 65536, 00:09:35.575 "uuid": "7e94543f-9959-4020-9075-9b3cab3608fd", 00:09:35.575 "assigned_rate_limits": { 00:09:35.575 "rw_ios_per_sec": 0, 00:09:35.575 "rw_mbytes_per_sec": 0, 00:09:35.575 "r_mbytes_per_sec": 0, 00:09:35.575 "w_mbytes_per_sec": 0 00:09:35.575 }, 00:09:35.575 "claimed": true, 00:09:35.575 "claim_type": "exclusive_write", 00:09:35.575 "zoned": false, 00:09:35.575 "supported_io_types": { 00:09:35.575 "read": true, 00:09:35.575 "write": true, 00:09:35.575 "unmap": true, 00:09:35.575 "flush": true, 00:09:35.575 "reset": true, 00:09:35.575 "nvme_admin": false, 00:09:35.575 "nvme_io": false, 00:09:35.575 "nvme_io_md": false, 00:09:35.575 "write_zeroes": true, 00:09:35.575 "zcopy": true, 00:09:35.575 "get_zone_info": false, 00:09:35.575 "zone_management": false, 00:09:35.575 "zone_append": false, 00:09:35.575 "compare": false, 00:09:35.575 "compare_and_write": false, 00:09:35.575 "abort": true, 00:09:35.575 "seek_hole": false, 00:09:35.575 "seek_data": false, 00:09:35.575 "copy": true, 00:09:35.575 "nvme_iov_md": false 00:09:35.575 }, 00:09:35.575 "memory_domains": [ 00:09:35.575 { 00:09:35.575 "dma_device_id": "system", 00:09:35.575 "dma_device_type": 1 00:09:35.575 }, 00:09:35.575 { 00:09:35.575 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:35.575 "dma_device_type": 2 00:09:35.575 } 00:09:35.575 ], 00:09:35.575 "driver_specific": {} 00:09:35.575 } 00:09:35.575 ] 00:09:35.575 04:58:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:09:35.575 04:58:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:35.575 04:58:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:35.575 04:58:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:35.575 04:58:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:35.575 04:58:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:35.575 04:58:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:35.575 04:58:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:35.575 04:58:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:35.575 04:58:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:35.575 04:58:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:35.575 04:58:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:35.575 04:58:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:35.575 04:58:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:35.575 04:58:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.575 04:58:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.575 04:58:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.575 04:58:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:35.575 "name": "Existed_Raid", 
00:09:35.575 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:35.575 "strip_size_kb": 64, 00:09:35.575 "state": "configuring", 00:09:35.575 "raid_level": "concat", 00:09:35.575 "superblock": false, 00:09:35.575 "num_base_bdevs": 4, 00:09:35.575 "num_base_bdevs_discovered": 1, 00:09:35.575 "num_base_bdevs_operational": 4, 00:09:35.575 "base_bdevs_list": [ 00:09:35.575 { 00:09:35.575 "name": "BaseBdev1", 00:09:35.575 "uuid": "7e94543f-9959-4020-9075-9b3cab3608fd", 00:09:35.576 "is_configured": true, 00:09:35.576 "data_offset": 0, 00:09:35.576 "data_size": 65536 00:09:35.576 }, 00:09:35.576 { 00:09:35.576 "name": "BaseBdev2", 00:09:35.576 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:35.576 "is_configured": false, 00:09:35.576 "data_offset": 0, 00:09:35.576 "data_size": 0 00:09:35.576 }, 00:09:35.576 { 00:09:35.576 "name": "BaseBdev3", 00:09:35.576 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:35.576 "is_configured": false, 00:09:35.576 "data_offset": 0, 00:09:35.576 "data_size": 0 00:09:35.576 }, 00:09:35.576 { 00:09:35.576 "name": "BaseBdev4", 00:09:35.576 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:35.576 "is_configured": false, 00:09:35.576 "data_offset": 0, 00:09:35.576 "data_size": 0 00:09:35.576 } 00:09:35.576 ] 00:09:35.576 }' 00:09:35.576 04:58:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:35.576 04:58:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.836 04:58:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:35.836 04:58:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.836 04:58:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.836 [2024-12-14 04:58:46.682176] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:35.836 [2024-12-14 04:58:46.682221] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:09:35.836 04:58:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.836 04:58:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:35.836 04:58:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.836 04:58:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.836 [2024-12-14 04:58:46.698194] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:35.836 [2024-12-14 04:58:46.700094] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:35.836 [2024-12-14 04:58:46.700182] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:35.836 [2024-12-14 04:58:46.700230] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:35.836 [2024-12-14 04:58:46.700264] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:35.836 [2024-12-14 04:58:46.700310] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:35.836 [2024-12-14 04:58:46.700342] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:35.836 04:58:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.836 04:58:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:35.836 04:58:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:35.836 04:58:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:09:35.836 04:58:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:35.836 04:58:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:35.836 04:58:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:35.836 04:58:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:35.836 04:58:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:35.836 04:58:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:35.836 04:58:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:35.836 04:58:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:35.836 04:58:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:35.836 04:58:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:35.836 04:58:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:35.836 04:58:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.836 04:58:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.095 04:58:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.095 04:58:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:36.095 "name": "Existed_Raid", 00:09:36.095 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:36.095 "strip_size_kb": 64, 00:09:36.095 "state": "configuring", 00:09:36.095 "raid_level": "concat", 00:09:36.095 "superblock": false, 00:09:36.095 "num_base_bdevs": 4, 00:09:36.095 
"num_base_bdevs_discovered": 1, 00:09:36.095 "num_base_bdevs_operational": 4, 00:09:36.095 "base_bdevs_list": [ 00:09:36.095 { 00:09:36.095 "name": "BaseBdev1", 00:09:36.095 "uuid": "7e94543f-9959-4020-9075-9b3cab3608fd", 00:09:36.095 "is_configured": true, 00:09:36.095 "data_offset": 0, 00:09:36.095 "data_size": 65536 00:09:36.095 }, 00:09:36.095 { 00:09:36.095 "name": "BaseBdev2", 00:09:36.095 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:36.095 "is_configured": false, 00:09:36.095 "data_offset": 0, 00:09:36.095 "data_size": 0 00:09:36.095 }, 00:09:36.095 { 00:09:36.095 "name": "BaseBdev3", 00:09:36.095 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:36.095 "is_configured": false, 00:09:36.095 "data_offset": 0, 00:09:36.095 "data_size": 0 00:09:36.095 }, 00:09:36.095 { 00:09:36.095 "name": "BaseBdev4", 00:09:36.095 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:36.095 "is_configured": false, 00:09:36.095 "data_offset": 0, 00:09:36.095 "data_size": 0 00:09:36.095 } 00:09:36.095 ] 00:09:36.095 }' 00:09:36.095 04:58:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:36.095 04:58:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.355 04:58:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:36.355 04:58:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.355 04:58:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.355 [2024-12-14 04:58:47.186779] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:36.355 BaseBdev2 00:09:36.355 04:58:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.355 04:58:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:36.355 04:58:47 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:36.355 04:58:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:36.355 04:58:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:36.355 04:58:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:36.355 04:58:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:36.355 04:58:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:36.355 04:58:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.355 04:58:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.355 04:58:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.355 04:58:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:36.355 04:58:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.355 04:58:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.355 [ 00:09:36.355 { 00:09:36.355 "name": "BaseBdev2", 00:09:36.355 "aliases": [ 00:09:36.355 "2346379d-84b0-4c13-9421-650464d68e5c" 00:09:36.355 ], 00:09:36.355 "product_name": "Malloc disk", 00:09:36.355 "block_size": 512, 00:09:36.355 "num_blocks": 65536, 00:09:36.355 "uuid": "2346379d-84b0-4c13-9421-650464d68e5c", 00:09:36.355 "assigned_rate_limits": { 00:09:36.355 "rw_ios_per_sec": 0, 00:09:36.355 "rw_mbytes_per_sec": 0, 00:09:36.355 "r_mbytes_per_sec": 0, 00:09:36.355 "w_mbytes_per_sec": 0 00:09:36.355 }, 00:09:36.355 "claimed": true, 00:09:36.355 "claim_type": "exclusive_write", 00:09:36.355 "zoned": false, 00:09:36.355 "supported_io_types": { 
00:09:36.355 "read": true, 00:09:36.355 "write": true, 00:09:36.355 "unmap": true, 00:09:36.355 "flush": true, 00:09:36.355 "reset": true, 00:09:36.355 "nvme_admin": false, 00:09:36.355 "nvme_io": false, 00:09:36.355 "nvme_io_md": false, 00:09:36.355 "write_zeroes": true, 00:09:36.355 "zcopy": true, 00:09:36.355 "get_zone_info": false, 00:09:36.355 "zone_management": false, 00:09:36.355 "zone_append": false, 00:09:36.355 "compare": false, 00:09:36.355 "compare_and_write": false, 00:09:36.355 "abort": true, 00:09:36.355 "seek_hole": false, 00:09:36.355 "seek_data": false, 00:09:36.355 "copy": true, 00:09:36.355 "nvme_iov_md": false 00:09:36.355 }, 00:09:36.355 "memory_domains": [ 00:09:36.355 { 00:09:36.355 "dma_device_id": "system", 00:09:36.355 "dma_device_type": 1 00:09:36.355 }, 00:09:36.355 { 00:09:36.355 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:36.355 "dma_device_type": 2 00:09:36.355 } 00:09:36.355 ], 00:09:36.355 "driver_specific": {} 00:09:36.355 } 00:09:36.355 ] 00:09:36.355 04:58:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.355 04:58:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:36.355 04:58:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:36.355 04:58:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:36.355 04:58:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:36.355 04:58:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:36.355 04:58:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:36.355 04:58:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:36.355 04:58:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:09:36.355 04:58:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:36.355 04:58:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:36.355 04:58:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:36.355 04:58:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:36.355 04:58:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:36.355 04:58:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:36.355 04:58:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:36.355 04:58:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.355 04:58:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.615 04:58:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.615 04:58:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:36.615 "name": "Existed_Raid", 00:09:36.615 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:36.615 "strip_size_kb": 64, 00:09:36.615 "state": "configuring", 00:09:36.615 "raid_level": "concat", 00:09:36.615 "superblock": false, 00:09:36.615 "num_base_bdevs": 4, 00:09:36.615 "num_base_bdevs_discovered": 2, 00:09:36.615 "num_base_bdevs_operational": 4, 00:09:36.615 "base_bdevs_list": [ 00:09:36.615 { 00:09:36.615 "name": "BaseBdev1", 00:09:36.615 "uuid": "7e94543f-9959-4020-9075-9b3cab3608fd", 00:09:36.615 "is_configured": true, 00:09:36.615 "data_offset": 0, 00:09:36.615 "data_size": 65536 00:09:36.615 }, 00:09:36.615 { 00:09:36.615 "name": "BaseBdev2", 00:09:36.615 "uuid": "2346379d-84b0-4c13-9421-650464d68e5c", 00:09:36.615 
"is_configured": true, 00:09:36.615 "data_offset": 0, 00:09:36.615 "data_size": 65536 00:09:36.615 }, 00:09:36.615 { 00:09:36.615 "name": "BaseBdev3", 00:09:36.615 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:36.615 "is_configured": false, 00:09:36.615 "data_offset": 0, 00:09:36.615 "data_size": 0 00:09:36.615 }, 00:09:36.615 { 00:09:36.615 "name": "BaseBdev4", 00:09:36.615 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:36.615 "is_configured": false, 00:09:36.615 "data_offset": 0, 00:09:36.615 "data_size": 0 00:09:36.615 } 00:09:36.615 ] 00:09:36.615 }' 00:09:36.615 04:58:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:36.615 04:58:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.877 04:58:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:36.878 04:58:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.878 04:58:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.878 [2024-12-14 04:58:47.648809] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:36.878 BaseBdev3 00:09:36.878 04:58:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.878 04:58:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:36.878 04:58:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:09:36.878 04:58:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:36.878 04:58:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:36.878 04:58:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:36.878 04:58:47 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:36.878 04:58:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:36.878 04:58:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.878 04:58:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.878 04:58:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.878 04:58:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:36.878 04:58:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.878 04:58:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.878 [ 00:09:36.878 { 00:09:36.878 "name": "BaseBdev3", 00:09:36.878 "aliases": [ 00:09:36.878 "967bc60b-efde-42a9-8025-76b3c478fb53" 00:09:36.878 ], 00:09:36.878 "product_name": "Malloc disk", 00:09:36.878 "block_size": 512, 00:09:36.878 "num_blocks": 65536, 00:09:36.878 "uuid": "967bc60b-efde-42a9-8025-76b3c478fb53", 00:09:36.878 "assigned_rate_limits": { 00:09:36.878 "rw_ios_per_sec": 0, 00:09:36.878 "rw_mbytes_per_sec": 0, 00:09:36.878 "r_mbytes_per_sec": 0, 00:09:36.878 "w_mbytes_per_sec": 0 00:09:36.878 }, 00:09:36.878 "claimed": true, 00:09:36.878 "claim_type": "exclusive_write", 00:09:36.878 "zoned": false, 00:09:36.878 "supported_io_types": { 00:09:36.878 "read": true, 00:09:36.878 "write": true, 00:09:36.878 "unmap": true, 00:09:36.878 "flush": true, 00:09:36.878 "reset": true, 00:09:36.878 "nvme_admin": false, 00:09:36.878 "nvme_io": false, 00:09:36.878 "nvme_io_md": false, 00:09:36.878 "write_zeroes": true, 00:09:36.878 "zcopy": true, 00:09:36.878 "get_zone_info": false, 00:09:36.878 "zone_management": false, 00:09:36.878 "zone_append": false, 00:09:36.878 "compare": false, 00:09:36.878 "compare_and_write": false, 
00:09:36.878 "abort": true, 00:09:36.878 "seek_hole": false, 00:09:36.878 "seek_data": false, 00:09:36.878 "copy": true, 00:09:36.878 "nvme_iov_md": false 00:09:36.878 }, 00:09:36.878 "memory_domains": [ 00:09:36.878 { 00:09:36.878 "dma_device_id": "system", 00:09:36.878 "dma_device_type": 1 00:09:36.878 }, 00:09:36.878 { 00:09:36.878 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:36.878 "dma_device_type": 2 00:09:36.878 } 00:09:36.878 ], 00:09:36.878 "driver_specific": {} 00:09:36.878 } 00:09:36.878 ] 00:09:36.878 04:58:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.878 04:58:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:36.878 04:58:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:36.878 04:58:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:36.878 04:58:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:36.878 04:58:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:36.878 04:58:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:36.878 04:58:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:36.878 04:58:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:36.878 04:58:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:36.878 04:58:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:36.878 04:58:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:36.878 04:58:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:09:36.878 04:58:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:36.878 04:58:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:36.878 04:58:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.878 04:58:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:36.878 04:58:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.878 04:58:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.878 04:58:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:36.878 "name": "Existed_Raid", 00:09:36.878 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:36.878 "strip_size_kb": 64, 00:09:36.878 "state": "configuring", 00:09:36.878 "raid_level": "concat", 00:09:36.878 "superblock": false, 00:09:36.878 "num_base_bdevs": 4, 00:09:36.878 "num_base_bdevs_discovered": 3, 00:09:36.878 "num_base_bdevs_operational": 4, 00:09:36.878 "base_bdevs_list": [ 00:09:36.878 { 00:09:36.878 "name": "BaseBdev1", 00:09:36.878 "uuid": "7e94543f-9959-4020-9075-9b3cab3608fd", 00:09:36.878 "is_configured": true, 00:09:36.878 "data_offset": 0, 00:09:36.878 "data_size": 65536 00:09:36.878 }, 00:09:36.878 { 00:09:36.878 "name": "BaseBdev2", 00:09:36.878 "uuid": "2346379d-84b0-4c13-9421-650464d68e5c", 00:09:36.878 "is_configured": true, 00:09:36.878 "data_offset": 0, 00:09:36.878 "data_size": 65536 00:09:36.878 }, 00:09:36.878 { 00:09:36.878 "name": "BaseBdev3", 00:09:36.878 "uuid": "967bc60b-efde-42a9-8025-76b3c478fb53", 00:09:36.878 "is_configured": true, 00:09:36.878 "data_offset": 0, 00:09:36.878 "data_size": 65536 00:09:36.878 }, 00:09:36.878 { 00:09:36.878 "name": "BaseBdev4", 00:09:36.878 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:36.878 "is_configured": false, 
00:09:36.878 "data_offset": 0, 00:09:36.878 "data_size": 0 00:09:36.878 } 00:09:36.878 ] 00:09:36.878 }' 00:09:36.878 04:58:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:36.878 04:58:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.475 04:58:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:09:37.475 04:58:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.475 04:58:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.475 [2024-12-14 04:58:48.127070] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:09:37.475 [2024-12-14 04:58:48.127127] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:09:37.475 [2024-12-14 04:58:48.127138] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:09:37.475 [2024-12-14 04:58:48.127459] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:37.475 [2024-12-14 04:58:48.127636] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:09:37.475 [2024-12-14 04:58:48.127661] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:09:37.475 [2024-12-14 04:58:48.127873] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:37.475 BaseBdev4 00:09:37.475 04:58:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.475 04:58:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:09:37.475 04:58:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:09:37.475 04:58:48 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:37.475 04:58:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:37.475 04:58:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:37.475 04:58:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:37.475 04:58:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:37.475 04:58:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.475 04:58:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.475 04:58:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.475 04:58:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:09:37.475 04:58:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.475 04:58:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.475 [ 00:09:37.475 { 00:09:37.475 "name": "BaseBdev4", 00:09:37.475 "aliases": [ 00:09:37.475 "3e174327-483a-4783-b751-799fb47975d0" 00:09:37.475 ], 00:09:37.475 "product_name": "Malloc disk", 00:09:37.475 "block_size": 512, 00:09:37.475 "num_blocks": 65536, 00:09:37.475 "uuid": "3e174327-483a-4783-b751-799fb47975d0", 00:09:37.475 "assigned_rate_limits": { 00:09:37.475 "rw_ios_per_sec": 0, 00:09:37.475 "rw_mbytes_per_sec": 0, 00:09:37.475 "r_mbytes_per_sec": 0, 00:09:37.475 "w_mbytes_per_sec": 0 00:09:37.475 }, 00:09:37.475 "claimed": true, 00:09:37.475 "claim_type": "exclusive_write", 00:09:37.475 "zoned": false, 00:09:37.475 "supported_io_types": { 00:09:37.475 "read": true, 00:09:37.475 "write": true, 00:09:37.475 "unmap": true, 00:09:37.475 "flush": true, 00:09:37.475 "reset": true, 00:09:37.475 
"nvme_admin": false, 00:09:37.475 "nvme_io": false, 00:09:37.475 "nvme_io_md": false, 00:09:37.475 "write_zeroes": true, 00:09:37.475 "zcopy": true, 00:09:37.475 "get_zone_info": false, 00:09:37.475 "zone_management": false, 00:09:37.475 "zone_append": false, 00:09:37.475 "compare": false, 00:09:37.475 "compare_and_write": false, 00:09:37.475 "abort": true, 00:09:37.475 "seek_hole": false, 00:09:37.475 "seek_data": false, 00:09:37.475 "copy": true, 00:09:37.475 "nvme_iov_md": false 00:09:37.475 }, 00:09:37.475 "memory_domains": [ 00:09:37.475 { 00:09:37.475 "dma_device_id": "system", 00:09:37.475 "dma_device_type": 1 00:09:37.475 }, 00:09:37.475 { 00:09:37.476 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:37.476 "dma_device_type": 2 00:09:37.476 } 00:09:37.476 ], 00:09:37.476 "driver_specific": {} 00:09:37.476 } 00:09:37.476 ] 00:09:37.476 04:58:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.476 04:58:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:37.476 04:58:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:37.476 04:58:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:37.476 04:58:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:09:37.476 04:58:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:37.476 04:58:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:37.476 04:58:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:37.476 04:58:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:37.476 04:58:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:37.476 
04:58:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:37.476 04:58:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:37.476 04:58:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:37.476 04:58:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:37.476 04:58:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:37.476 04:58:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:37.476 04:58:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.476 04:58:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.476 04:58:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.476 04:58:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:37.476 "name": "Existed_Raid", 00:09:37.476 "uuid": "c45edfc5-0384-48cc-af5c-e93e7976ddf8", 00:09:37.476 "strip_size_kb": 64, 00:09:37.476 "state": "online", 00:09:37.476 "raid_level": "concat", 00:09:37.476 "superblock": false, 00:09:37.476 "num_base_bdevs": 4, 00:09:37.476 "num_base_bdevs_discovered": 4, 00:09:37.476 "num_base_bdevs_operational": 4, 00:09:37.476 "base_bdevs_list": [ 00:09:37.476 { 00:09:37.476 "name": "BaseBdev1", 00:09:37.476 "uuid": "7e94543f-9959-4020-9075-9b3cab3608fd", 00:09:37.476 "is_configured": true, 00:09:37.476 "data_offset": 0, 00:09:37.476 "data_size": 65536 00:09:37.476 }, 00:09:37.476 { 00:09:37.476 "name": "BaseBdev2", 00:09:37.476 "uuid": "2346379d-84b0-4c13-9421-650464d68e5c", 00:09:37.476 "is_configured": true, 00:09:37.476 "data_offset": 0, 00:09:37.476 "data_size": 65536 00:09:37.476 }, 00:09:37.476 { 00:09:37.476 "name": "BaseBdev3", 
00:09:37.476 "uuid": "967bc60b-efde-42a9-8025-76b3c478fb53", 00:09:37.476 "is_configured": true, 00:09:37.476 "data_offset": 0, 00:09:37.476 "data_size": 65536 00:09:37.476 }, 00:09:37.476 { 00:09:37.476 "name": "BaseBdev4", 00:09:37.476 "uuid": "3e174327-483a-4783-b751-799fb47975d0", 00:09:37.476 "is_configured": true, 00:09:37.476 "data_offset": 0, 00:09:37.476 "data_size": 65536 00:09:37.476 } 00:09:37.476 ] 00:09:37.476 }' 00:09:37.476 04:58:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:37.476 04:58:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.045 04:58:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:38.045 04:58:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:38.045 04:58:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:38.045 04:58:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:38.045 04:58:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:38.045 04:58:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:38.045 04:58:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:38.045 04:58:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.045 04:58:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.045 04:58:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:38.045 [2024-12-14 04:58:48.654560] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:38.045 04:58:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.045 
04:58:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:38.045 "name": "Existed_Raid", 00:09:38.046 "aliases": [ 00:09:38.046 "c45edfc5-0384-48cc-af5c-e93e7976ddf8" 00:09:38.046 ], 00:09:38.046 "product_name": "Raid Volume", 00:09:38.046 "block_size": 512, 00:09:38.046 "num_blocks": 262144, 00:09:38.046 "uuid": "c45edfc5-0384-48cc-af5c-e93e7976ddf8", 00:09:38.046 "assigned_rate_limits": { 00:09:38.046 "rw_ios_per_sec": 0, 00:09:38.046 "rw_mbytes_per_sec": 0, 00:09:38.046 "r_mbytes_per_sec": 0, 00:09:38.046 "w_mbytes_per_sec": 0 00:09:38.046 }, 00:09:38.046 "claimed": false, 00:09:38.046 "zoned": false, 00:09:38.046 "supported_io_types": { 00:09:38.046 "read": true, 00:09:38.046 "write": true, 00:09:38.046 "unmap": true, 00:09:38.046 "flush": true, 00:09:38.046 "reset": true, 00:09:38.046 "nvme_admin": false, 00:09:38.046 "nvme_io": false, 00:09:38.046 "nvme_io_md": false, 00:09:38.046 "write_zeroes": true, 00:09:38.046 "zcopy": false, 00:09:38.046 "get_zone_info": false, 00:09:38.046 "zone_management": false, 00:09:38.046 "zone_append": false, 00:09:38.046 "compare": false, 00:09:38.046 "compare_and_write": false, 00:09:38.046 "abort": false, 00:09:38.046 "seek_hole": false, 00:09:38.046 "seek_data": false, 00:09:38.046 "copy": false, 00:09:38.046 "nvme_iov_md": false 00:09:38.046 }, 00:09:38.046 "memory_domains": [ 00:09:38.046 { 00:09:38.046 "dma_device_id": "system", 00:09:38.046 "dma_device_type": 1 00:09:38.046 }, 00:09:38.046 { 00:09:38.046 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:38.046 "dma_device_type": 2 00:09:38.046 }, 00:09:38.046 { 00:09:38.046 "dma_device_id": "system", 00:09:38.046 "dma_device_type": 1 00:09:38.046 }, 00:09:38.046 { 00:09:38.046 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:38.046 "dma_device_type": 2 00:09:38.046 }, 00:09:38.046 { 00:09:38.046 "dma_device_id": "system", 00:09:38.046 "dma_device_type": 1 00:09:38.046 }, 00:09:38.046 { 00:09:38.046 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:09:38.046 "dma_device_type": 2 00:09:38.046 }, 00:09:38.046 { 00:09:38.046 "dma_device_id": "system", 00:09:38.046 "dma_device_type": 1 00:09:38.046 }, 00:09:38.046 { 00:09:38.046 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:38.046 "dma_device_type": 2 00:09:38.046 } 00:09:38.046 ], 00:09:38.046 "driver_specific": { 00:09:38.046 "raid": { 00:09:38.046 "uuid": "c45edfc5-0384-48cc-af5c-e93e7976ddf8", 00:09:38.046 "strip_size_kb": 64, 00:09:38.046 "state": "online", 00:09:38.046 "raid_level": "concat", 00:09:38.046 "superblock": false, 00:09:38.046 "num_base_bdevs": 4, 00:09:38.046 "num_base_bdevs_discovered": 4, 00:09:38.046 "num_base_bdevs_operational": 4, 00:09:38.046 "base_bdevs_list": [ 00:09:38.046 { 00:09:38.046 "name": "BaseBdev1", 00:09:38.046 "uuid": "7e94543f-9959-4020-9075-9b3cab3608fd", 00:09:38.046 "is_configured": true, 00:09:38.046 "data_offset": 0, 00:09:38.046 "data_size": 65536 00:09:38.046 }, 00:09:38.046 { 00:09:38.046 "name": "BaseBdev2", 00:09:38.046 "uuid": "2346379d-84b0-4c13-9421-650464d68e5c", 00:09:38.046 "is_configured": true, 00:09:38.046 "data_offset": 0, 00:09:38.046 "data_size": 65536 00:09:38.046 }, 00:09:38.046 { 00:09:38.046 "name": "BaseBdev3", 00:09:38.046 "uuid": "967bc60b-efde-42a9-8025-76b3c478fb53", 00:09:38.046 "is_configured": true, 00:09:38.046 "data_offset": 0, 00:09:38.046 "data_size": 65536 00:09:38.046 }, 00:09:38.046 { 00:09:38.046 "name": "BaseBdev4", 00:09:38.046 "uuid": "3e174327-483a-4783-b751-799fb47975d0", 00:09:38.046 "is_configured": true, 00:09:38.046 "data_offset": 0, 00:09:38.046 "data_size": 65536 00:09:38.046 } 00:09:38.046 ] 00:09:38.046 } 00:09:38.046 } 00:09:38.046 }' 00:09:38.046 04:58:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:38.046 04:58:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:38.046 BaseBdev2 
00:09:38.046 BaseBdev3 00:09:38.046 BaseBdev4' 00:09:38.046 04:58:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:38.046 04:58:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:38.046 04:58:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:38.046 04:58:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:38.046 04:58:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.046 04:58:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.046 04:58:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:38.046 04:58:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.046 04:58:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:38.046 04:58:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:38.046 04:58:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:38.046 04:58:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:38.046 04:58:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.046 04:58:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:38.046 04:58:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.046 04:58:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.046 04:58:48 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:38.046 04:58:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:38.046 04:58:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:38.046 04:58:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:38.046 04:58:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:38.046 04:58:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.046 04:58:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.046 04:58:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.305 04:58:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:38.305 04:58:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:38.305 04:58:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:38.306 04:58:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:38.306 04:58:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:09:38.306 04:58:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.306 04:58:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.306 04:58:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.306 04:58:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:38.306 04:58:48 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:38.306 04:58:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:38.306 04:58:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.306 04:58:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.306 [2024-12-14 04:58:48.989662] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:38.306 [2024-12-14 04:58:48.989735] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:38.306 [2024-12-14 04:58:48.989812] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:38.306 04:58:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.306 04:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:38.306 04:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:09:38.306 04:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:38.306 04:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:38.306 04:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:38.306 04:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:09:38.306 04:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:38.306 04:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:38.306 04:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:38.306 04:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:09:38.306 04:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:38.306 04:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:38.306 04:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:38.306 04:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:38.306 04:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:38.306 04:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:38.306 04:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:38.306 04:58:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.306 04:58:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.306 04:58:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.306 04:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:38.306 "name": "Existed_Raid", 00:09:38.306 "uuid": "c45edfc5-0384-48cc-af5c-e93e7976ddf8", 00:09:38.306 "strip_size_kb": 64, 00:09:38.306 "state": "offline", 00:09:38.306 "raid_level": "concat", 00:09:38.306 "superblock": false, 00:09:38.306 "num_base_bdevs": 4, 00:09:38.306 "num_base_bdevs_discovered": 3, 00:09:38.306 "num_base_bdevs_operational": 3, 00:09:38.306 "base_bdevs_list": [ 00:09:38.306 { 00:09:38.306 "name": null, 00:09:38.306 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:38.306 "is_configured": false, 00:09:38.306 "data_offset": 0, 00:09:38.306 "data_size": 65536 00:09:38.306 }, 00:09:38.306 { 00:09:38.306 "name": "BaseBdev2", 00:09:38.306 "uuid": "2346379d-84b0-4c13-9421-650464d68e5c", 00:09:38.306 "is_configured": 
true, 00:09:38.306 "data_offset": 0, 00:09:38.306 "data_size": 65536 00:09:38.306 }, 00:09:38.306 { 00:09:38.306 "name": "BaseBdev3", 00:09:38.306 "uuid": "967bc60b-efde-42a9-8025-76b3c478fb53", 00:09:38.306 "is_configured": true, 00:09:38.306 "data_offset": 0, 00:09:38.306 "data_size": 65536 00:09:38.306 }, 00:09:38.306 { 00:09:38.306 "name": "BaseBdev4", 00:09:38.306 "uuid": "3e174327-483a-4783-b751-799fb47975d0", 00:09:38.306 "is_configured": true, 00:09:38.306 "data_offset": 0, 00:09:38.306 "data_size": 65536 00:09:38.306 } 00:09:38.306 ] 00:09:38.306 }' 00:09:38.306 04:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:38.306 04:58:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.875 04:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:38.875 04:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:38.875 04:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:38.875 04:58:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.875 04:58:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.875 04:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:38.875 04:58:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.875 04:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:38.875 04:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:38.875 04:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:38.875 04:58:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:09:38.875 04:58:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.875 [2024-12-14 04:58:49.536018] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:38.876 04:58:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.876 04:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:38.876 04:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:38.876 04:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:38.876 04:58:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.876 04:58:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.876 04:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:38.876 04:58:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.876 04:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:38.876 04:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:38.876 04:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:38.876 04:58:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.876 04:58:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.876 [2024-12-14 04:58:49.606955] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:38.876 04:58:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.876 04:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:38.876 04:58:49 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:38.876 04:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:38.876 04:58:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.876 04:58:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.876 04:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:38.876 04:58:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.876 04:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:38.876 04:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:38.876 04:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:09:38.876 04:58:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.876 04:58:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.876 [2024-12-14 04:58:49.678097] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:09:38.876 [2024-12-14 04:58:49.678217] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:09:38.876 04:58:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.876 04:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:38.876 04:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:38.876 04:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:38.876 04:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r 
'.[0]["name"] | select(.)' 00:09:38.876 04:58:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.876 04:58:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.876 04:58:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.876 04:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:38.876 04:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:38.876 04:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:09:38.876 04:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:38.876 04:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:38.876 04:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:38.876 04:58:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.876 04:58:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.137 BaseBdev2 00:09:39.137 04:58:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.137 04:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:39.137 04:58:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:39.137 04:58:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:39.137 04:58:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:39.137 04:58:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:39.137 04:58:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # 
bdev_timeout=2000 00:09:39.137 04:58:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:39.137 04:58:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.137 04:58:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.137 04:58:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.137 04:58:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:39.137 04:58:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.137 04:58:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.137 [ 00:09:39.137 { 00:09:39.137 "name": "BaseBdev2", 00:09:39.137 "aliases": [ 00:09:39.137 "6ab16949-1a17-448e-84d5-a37dcd110f09" 00:09:39.137 ], 00:09:39.137 "product_name": "Malloc disk", 00:09:39.137 "block_size": 512, 00:09:39.137 "num_blocks": 65536, 00:09:39.137 "uuid": "6ab16949-1a17-448e-84d5-a37dcd110f09", 00:09:39.137 "assigned_rate_limits": { 00:09:39.137 "rw_ios_per_sec": 0, 00:09:39.137 "rw_mbytes_per_sec": 0, 00:09:39.137 "r_mbytes_per_sec": 0, 00:09:39.137 "w_mbytes_per_sec": 0 00:09:39.137 }, 00:09:39.137 "claimed": false, 00:09:39.137 "zoned": false, 00:09:39.137 "supported_io_types": { 00:09:39.137 "read": true, 00:09:39.137 "write": true, 00:09:39.137 "unmap": true, 00:09:39.137 "flush": true, 00:09:39.137 "reset": true, 00:09:39.137 "nvme_admin": false, 00:09:39.137 "nvme_io": false, 00:09:39.137 "nvme_io_md": false, 00:09:39.137 "write_zeroes": true, 00:09:39.137 "zcopy": true, 00:09:39.137 "get_zone_info": false, 00:09:39.137 "zone_management": false, 00:09:39.137 "zone_append": false, 00:09:39.137 "compare": false, 00:09:39.137 "compare_and_write": false, 00:09:39.137 "abort": true, 00:09:39.137 "seek_hole": false, 00:09:39.137 
"seek_data": false, 00:09:39.137 "copy": true, 00:09:39.137 "nvme_iov_md": false 00:09:39.137 }, 00:09:39.137 "memory_domains": [ 00:09:39.137 { 00:09:39.137 "dma_device_id": "system", 00:09:39.137 "dma_device_type": 1 00:09:39.137 }, 00:09:39.137 { 00:09:39.137 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:39.137 "dma_device_type": 2 00:09:39.137 } 00:09:39.137 ], 00:09:39.137 "driver_specific": {} 00:09:39.137 } 00:09:39.137 ] 00:09:39.137 04:58:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.137 04:58:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:39.137 04:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:39.137 04:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:39.137 04:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:39.137 04:58:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.137 04:58:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.137 BaseBdev3 00:09:39.137 04:58:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.137 04:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:39.137 04:58:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:09:39.137 04:58:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:39.137 04:58:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:39.137 04:58:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:39.137 04:58:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 
00:09:39.137 04:58:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:39.137 04:58:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.137 04:58:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.137 04:58:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.137 04:58:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:39.137 04:58:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.137 04:58:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.137 [ 00:09:39.137 { 00:09:39.137 "name": "BaseBdev3", 00:09:39.137 "aliases": [ 00:09:39.137 "1a1cf5c8-cdaf-4f03-8167-a3a43c8a71de" 00:09:39.137 ], 00:09:39.137 "product_name": "Malloc disk", 00:09:39.137 "block_size": 512, 00:09:39.137 "num_blocks": 65536, 00:09:39.137 "uuid": "1a1cf5c8-cdaf-4f03-8167-a3a43c8a71de", 00:09:39.137 "assigned_rate_limits": { 00:09:39.137 "rw_ios_per_sec": 0, 00:09:39.137 "rw_mbytes_per_sec": 0, 00:09:39.137 "r_mbytes_per_sec": 0, 00:09:39.137 "w_mbytes_per_sec": 0 00:09:39.137 }, 00:09:39.137 "claimed": false, 00:09:39.137 "zoned": false, 00:09:39.137 "supported_io_types": { 00:09:39.137 "read": true, 00:09:39.137 "write": true, 00:09:39.137 "unmap": true, 00:09:39.137 "flush": true, 00:09:39.137 "reset": true, 00:09:39.137 "nvme_admin": false, 00:09:39.137 "nvme_io": false, 00:09:39.137 "nvme_io_md": false, 00:09:39.137 "write_zeroes": true, 00:09:39.137 "zcopy": true, 00:09:39.137 "get_zone_info": false, 00:09:39.137 "zone_management": false, 00:09:39.137 "zone_append": false, 00:09:39.137 "compare": false, 00:09:39.137 "compare_and_write": false, 00:09:39.137 "abort": true, 00:09:39.137 "seek_hole": false, 00:09:39.137 "seek_data": false, 
00:09:39.137 "copy": true, 00:09:39.137 "nvme_iov_md": false 00:09:39.137 }, 00:09:39.137 "memory_domains": [ 00:09:39.137 { 00:09:39.137 "dma_device_id": "system", 00:09:39.137 "dma_device_type": 1 00:09:39.137 }, 00:09:39.137 { 00:09:39.137 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:39.137 "dma_device_type": 2 00:09:39.137 } 00:09:39.137 ], 00:09:39.137 "driver_specific": {} 00:09:39.137 } 00:09:39.137 ] 00:09:39.137 04:58:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.137 04:58:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:39.137 04:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:39.137 04:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:39.137 04:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:09:39.137 04:58:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.137 04:58:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.137 BaseBdev4 00:09:39.137 04:58:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.137 04:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:09:39.137 04:58:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:09:39.137 04:58:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:39.137 04:58:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:39.137 04:58:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:39.137 04:58:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:39.137 
04:58:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:39.137 04:58:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.137 04:58:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.137 04:58:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.137 04:58:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:09:39.137 04:58:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.137 04:58:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.137 [ 00:09:39.137 { 00:09:39.137 "name": "BaseBdev4", 00:09:39.137 "aliases": [ 00:09:39.137 "4e421dfd-10d7-4b05-888a-b2d67b82211b" 00:09:39.137 ], 00:09:39.137 "product_name": "Malloc disk", 00:09:39.137 "block_size": 512, 00:09:39.137 "num_blocks": 65536, 00:09:39.137 "uuid": "4e421dfd-10d7-4b05-888a-b2d67b82211b", 00:09:39.138 "assigned_rate_limits": { 00:09:39.138 "rw_ios_per_sec": 0, 00:09:39.138 "rw_mbytes_per_sec": 0, 00:09:39.138 "r_mbytes_per_sec": 0, 00:09:39.138 "w_mbytes_per_sec": 0 00:09:39.138 }, 00:09:39.138 "claimed": false, 00:09:39.138 "zoned": false, 00:09:39.138 "supported_io_types": { 00:09:39.138 "read": true, 00:09:39.138 "write": true, 00:09:39.138 "unmap": true, 00:09:39.138 "flush": true, 00:09:39.138 "reset": true, 00:09:39.138 "nvme_admin": false, 00:09:39.138 "nvme_io": false, 00:09:39.138 "nvme_io_md": false, 00:09:39.138 "write_zeroes": true, 00:09:39.138 "zcopy": true, 00:09:39.138 "get_zone_info": false, 00:09:39.138 "zone_management": false, 00:09:39.138 "zone_append": false, 00:09:39.138 "compare": false, 00:09:39.138 "compare_and_write": false, 00:09:39.138 "abort": true, 00:09:39.138 "seek_hole": false, 00:09:39.138 "seek_data": false, 00:09:39.138 
"copy": true, 00:09:39.138 "nvme_iov_md": false 00:09:39.138 }, 00:09:39.138 "memory_domains": [ 00:09:39.138 { 00:09:39.138 "dma_device_id": "system", 00:09:39.138 "dma_device_type": 1 00:09:39.138 }, 00:09:39.138 { 00:09:39.138 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:39.138 "dma_device_type": 2 00:09:39.138 } 00:09:39.138 ], 00:09:39.138 "driver_specific": {} 00:09:39.138 } 00:09:39.138 ] 00:09:39.138 04:58:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.138 04:58:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:39.138 04:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:39.138 04:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:39.138 04:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:39.138 04:58:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.138 04:58:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.138 [2024-12-14 04:58:49.908933] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:39.138 [2024-12-14 04:58:49.909031] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:39.138 [2024-12-14 04:58:49.909073] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:39.138 [2024-12-14 04:58:49.910869] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:39.138 [2024-12-14 04:58:49.910961] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:09:39.138 04:58:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.138 04:58:49 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:39.138 04:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:39.138 04:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:39.138 04:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:39.138 04:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:39.138 04:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:39.138 04:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:39.138 04:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:39.138 04:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:39.138 04:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:39.138 04:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:39.138 04:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:39.138 04:58:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.138 04:58:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.138 04:58:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.138 04:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:39.138 "name": "Existed_Raid", 00:09:39.138 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:39.138 "strip_size_kb": 64, 00:09:39.138 "state": "configuring", 00:09:39.138 
"raid_level": "concat", 00:09:39.138 "superblock": false, 00:09:39.138 "num_base_bdevs": 4, 00:09:39.138 "num_base_bdevs_discovered": 3, 00:09:39.138 "num_base_bdevs_operational": 4, 00:09:39.138 "base_bdevs_list": [ 00:09:39.138 { 00:09:39.138 "name": "BaseBdev1", 00:09:39.138 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:39.138 "is_configured": false, 00:09:39.138 "data_offset": 0, 00:09:39.138 "data_size": 0 00:09:39.138 }, 00:09:39.138 { 00:09:39.138 "name": "BaseBdev2", 00:09:39.138 "uuid": "6ab16949-1a17-448e-84d5-a37dcd110f09", 00:09:39.138 "is_configured": true, 00:09:39.138 "data_offset": 0, 00:09:39.138 "data_size": 65536 00:09:39.138 }, 00:09:39.138 { 00:09:39.138 "name": "BaseBdev3", 00:09:39.138 "uuid": "1a1cf5c8-cdaf-4f03-8167-a3a43c8a71de", 00:09:39.138 "is_configured": true, 00:09:39.138 "data_offset": 0, 00:09:39.138 "data_size": 65536 00:09:39.138 }, 00:09:39.138 { 00:09:39.138 "name": "BaseBdev4", 00:09:39.138 "uuid": "4e421dfd-10d7-4b05-888a-b2d67b82211b", 00:09:39.138 "is_configured": true, 00:09:39.138 "data_offset": 0, 00:09:39.138 "data_size": 65536 00:09:39.138 } 00:09:39.138 ] 00:09:39.138 }' 00:09:39.138 04:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:39.138 04:58:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.707 04:58:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:39.707 04:58:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.707 04:58:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.707 [2024-12-14 04:58:50.324274] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:39.707 04:58:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.707 04:58:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:39.707 04:58:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:39.707 04:58:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:39.707 04:58:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:39.707 04:58:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:39.707 04:58:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:39.707 04:58:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:39.707 04:58:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:39.707 04:58:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:39.707 04:58:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:39.707 04:58:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:39.707 04:58:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.707 04:58:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.707 04:58:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:39.707 04:58:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.707 04:58:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:39.707 "name": "Existed_Raid", 00:09:39.707 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:39.707 "strip_size_kb": 64, 00:09:39.707 "state": "configuring", 00:09:39.707 "raid_level": "concat", 00:09:39.707 "superblock": false, 
00:09:39.707 "num_base_bdevs": 4, 00:09:39.707 "num_base_bdevs_discovered": 2, 00:09:39.707 "num_base_bdevs_operational": 4, 00:09:39.707 "base_bdevs_list": [ 00:09:39.707 { 00:09:39.707 "name": "BaseBdev1", 00:09:39.707 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:39.707 "is_configured": false, 00:09:39.707 "data_offset": 0, 00:09:39.707 "data_size": 0 00:09:39.707 }, 00:09:39.707 { 00:09:39.707 "name": null, 00:09:39.707 "uuid": "6ab16949-1a17-448e-84d5-a37dcd110f09", 00:09:39.707 "is_configured": false, 00:09:39.707 "data_offset": 0, 00:09:39.707 "data_size": 65536 00:09:39.707 }, 00:09:39.707 { 00:09:39.707 "name": "BaseBdev3", 00:09:39.707 "uuid": "1a1cf5c8-cdaf-4f03-8167-a3a43c8a71de", 00:09:39.707 "is_configured": true, 00:09:39.707 "data_offset": 0, 00:09:39.707 "data_size": 65536 00:09:39.707 }, 00:09:39.707 { 00:09:39.707 "name": "BaseBdev4", 00:09:39.707 "uuid": "4e421dfd-10d7-4b05-888a-b2d67b82211b", 00:09:39.707 "is_configured": true, 00:09:39.707 "data_offset": 0, 00:09:39.707 "data_size": 65536 00:09:39.707 } 00:09:39.707 ] 00:09:39.707 }' 00:09:39.707 04:58:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:39.707 04:58:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.967 04:58:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:39.967 04:58:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.967 04:58:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:39.967 04:58:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.967 04:58:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.967 04:58:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:39.967 04:58:50 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:39.967 04:58:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.967 04:58:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.967 [2024-12-14 04:58:50.846275] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:39.967 BaseBdev1 00:09:39.967 04:58:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.967 04:58:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:40.227 04:58:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:40.227 04:58:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:40.227 04:58:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:40.227 04:58:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:40.227 04:58:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:40.227 04:58:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:40.227 04:58:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.227 04:58:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.227 04:58:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.227 04:58:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:40.227 04:58:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.227 04:58:50 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:40.227 [ 00:09:40.227 { 00:09:40.227 "name": "BaseBdev1", 00:09:40.227 "aliases": [ 00:09:40.227 "378f62d7-caae-4d86-8e98-fcf24cd73a31" 00:09:40.227 ], 00:09:40.227 "product_name": "Malloc disk", 00:09:40.227 "block_size": 512, 00:09:40.227 "num_blocks": 65536, 00:09:40.227 "uuid": "378f62d7-caae-4d86-8e98-fcf24cd73a31", 00:09:40.227 "assigned_rate_limits": { 00:09:40.227 "rw_ios_per_sec": 0, 00:09:40.227 "rw_mbytes_per_sec": 0, 00:09:40.227 "r_mbytes_per_sec": 0, 00:09:40.227 "w_mbytes_per_sec": 0 00:09:40.227 }, 00:09:40.227 "claimed": true, 00:09:40.227 "claim_type": "exclusive_write", 00:09:40.227 "zoned": false, 00:09:40.227 "supported_io_types": { 00:09:40.227 "read": true, 00:09:40.227 "write": true, 00:09:40.227 "unmap": true, 00:09:40.227 "flush": true, 00:09:40.227 "reset": true, 00:09:40.227 "nvme_admin": false, 00:09:40.227 "nvme_io": false, 00:09:40.227 "nvme_io_md": false, 00:09:40.227 "write_zeroes": true, 00:09:40.227 "zcopy": true, 00:09:40.227 "get_zone_info": false, 00:09:40.227 "zone_management": false, 00:09:40.227 "zone_append": false, 00:09:40.227 "compare": false, 00:09:40.227 "compare_and_write": false, 00:09:40.227 "abort": true, 00:09:40.227 "seek_hole": false, 00:09:40.227 "seek_data": false, 00:09:40.227 "copy": true, 00:09:40.227 "nvme_iov_md": false 00:09:40.227 }, 00:09:40.227 "memory_domains": [ 00:09:40.227 { 00:09:40.227 "dma_device_id": "system", 00:09:40.227 "dma_device_type": 1 00:09:40.227 }, 00:09:40.227 { 00:09:40.227 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:40.227 "dma_device_type": 2 00:09:40.227 } 00:09:40.227 ], 00:09:40.227 "driver_specific": {} 00:09:40.227 } 00:09:40.227 ] 00:09:40.227 04:58:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.227 04:58:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:40.227 04:58:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:40.227 04:58:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:40.227 04:58:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:40.227 04:58:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:40.227 04:58:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:40.227 04:58:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:40.227 04:58:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:40.228 04:58:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:40.228 04:58:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:40.228 04:58:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:40.228 04:58:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:40.228 04:58:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:40.228 04:58:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.228 04:58:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.228 04:58:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.228 04:58:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:40.228 "name": "Existed_Raid", 00:09:40.228 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:40.228 "strip_size_kb": 64, 00:09:40.228 "state": "configuring", 00:09:40.228 "raid_level": "concat", 00:09:40.228 "superblock": false, 
00:09:40.228 "num_base_bdevs": 4, 00:09:40.228 "num_base_bdevs_discovered": 3, 00:09:40.228 "num_base_bdevs_operational": 4, 00:09:40.228 "base_bdevs_list": [ 00:09:40.228 { 00:09:40.228 "name": "BaseBdev1", 00:09:40.228 "uuid": "378f62d7-caae-4d86-8e98-fcf24cd73a31", 00:09:40.228 "is_configured": true, 00:09:40.228 "data_offset": 0, 00:09:40.228 "data_size": 65536 00:09:40.228 }, 00:09:40.228 { 00:09:40.228 "name": null, 00:09:40.228 "uuid": "6ab16949-1a17-448e-84d5-a37dcd110f09", 00:09:40.228 "is_configured": false, 00:09:40.228 "data_offset": 0, 00:09:40.228 "data_size": 65536 00:09:40.228 }, 00:09:40.228 { 00:09:40.228 "name": "BaseBdev3", 00:09:40.228 "uuid": "1a1cf5c8-cdaf-4f03-8167-a3a43c8a71de", 00:09:40.228 "is_configured": true, 00:09:40.228 "data_offset": 0, 00:09:40.228 "data_size": 65536 00:09:40.228 }, 00:09:40.228 { 00:09:40.228 "name": "BaseBdev4", 00:09:40.228 "uuid": "4e421dfd-10d7-4b05-888a-b2d67b82211b", 00:09:40.228 "is_configured": true, 00:09:40.228 "data_offset": 0, 00:09:40.228 "data_size": 65536 00:09:40.228 } 00:09:40.228 ] 00:09:40.228 }' 00:09:40.228 04:58:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:40.228 04:58:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.488 04:58:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:40.488 04:58:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:40.488 04:58:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.488 04:58:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.488 04:58:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.488 04:58:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:40.488 04:58:51 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:40.488 04:58:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.488 04:58:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.488 [2024-12-14 04:58:51.233643] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:40.488 04:58:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.488 04:58:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:40.488 04:58:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:40.488 04:58:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:40.488 04:58:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:40.488 04:58:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:40.488 04:58:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:40.488 04:58:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:40.488 04:58:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:40.488 04:58:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:40.488 04:58:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:40.488 04:58:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:40.488 04:58:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.488 04:58:51 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:40.488 04:58:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.488 04:58:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.488 04:58:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:40.488 "name": "Existed_Raid", 00:09:40.488 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:40.488 "strip_size_kb": 64, 00:09:40.488 "state": "configuring", 00:09:40.488 "raid_level": "concat", 00:09:40.488 "superblock": false, 00:09:40.488 "num_base_bdevs": 4, 00:09:40.488 "num_base_bdevs_discovered": 2, 00:09:40.488 "num_base_bdevs_operational": 4, 00:09:40.488 "base_bdevs_list": [ 00:09:40.488 { 00:09:40.488 "name": "BaseBdev1", 00:09:40.488 "uuid": "378f62d7-caae-4d86-8e98-fcf24cd73a31", 00:09:40.488 "is_configured": true, 00:09:40.488 "data_offset": 0, 00:09:40.488 "data_size": 65536 00:09:40.488 }, 00:09:40.488 { 00:09:40.488 "name": null, 00:09:40.488 "uuid": "6ab16949-1a17-448e-84d5-a37dcd110f09", 00:09:40.488 "is_configured": false, 00:09:40.488 "data_offset": 0, 00:09:40.488 "data_size": 65536 00:09:40.488 }, 00:09:40.488 { 00:09:40.488 "name": null, 00:09:40.488 "uuid": "1a1cf5c8-cdaf-4f03-8167-a3a43c8a71de", 00:09:40.488 "is_configured": false, 00:09:40.488 "data_offset": 0, 00:09:40.488 "data_size": 65536 00:09:40.488 }, 00:09:40.488 { 00:09:40.488 "name": "BaseBdev4", 00:09:40.488 "uuid": "4e421dfd-10d7-4b05-888a-b2d67b82211b", 00:09:40.488 "is_configured": true, 00:09:40.488 "data_offset": 0, 00:09:40.488 "data_size": 65536 00:09:40.488 } 00:09:40.488 ] 00:09:40.488 }' 00:09:40.488 04:58:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:40.488 04:58:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.057 04:58:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:09:41.057 04:58:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.057 04:58:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:41.057 04:58:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.057 04:58:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.057 04:58:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:41.057 04:58:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:41.057 04:58:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.057 04:58:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.057 [2024-12-14 04:58:51.740809] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:41.057 04:58:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.057 04:58:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:41.057 04:58:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:41.057 04:58:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:41.057 04:58:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:41.057 04:58:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:41.057 04:58:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:41.057 04:58:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # 
local raid_bdev_info 00:09:41.057 04:58:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:41.057 04:58:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:41.057 04:58:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:41.057 04:58:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:41.057 04:58:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:41.057 04:58:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.057 04:58:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.057 04:58:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.057 04:58:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:41.057 "name": "Existed_Raid", 00:09:41.057 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:41.058 "strip_size_kb": 64, 00:09:41.058 "state": "configuring", 00:09:41.058 "raid_level": "concat", 00:09:41.058 "superblock": false, 00:09:41.058 "num_base_bdevs": 4, 00:09:41.058 "num_base_bdevs_discovered": 3, 00:09:41.058 "num_base_bdevs_operational": 4, 00:09:41.058 "base_bdevs_list": [ 00:09:41.058 { 00:09:41.058 "name": "BaseBdev1", 00:09:41.058 "uuid": "378f62d7-caae-4d86-8e98-fcf24cd73a31", 00:09:41.058 "is_configured": true, 00:09:41.058 "data_offset": 0, 00:09:41.058 "data_size": 65536 00:09:41.058 }, 00:09:41.058 { 00:09:41.058 "name": null, 00:09:41.058 "uuid": "6ab16949-1a17-448e-84d5-a37dcd110f09", 00:09:41.058 "is_configured": false, 00:09:41.058 "data_offset": 0, 00:09:41.058 "data_size": 65536 00:09:41.058 }, 00:09:41.058 { 00:09:41.058 "name": "BaseBdev3", 00:09:41.058 "uuid": "1a1cf5c8-cdaf-4f03-8167-a3a43c8a71de", 00:09:41.058 
"is_configured": true, 00:09:41.058 "data_offset": 0, 00:09:41.058 "data_size": 65536 00:09:41.058 }, 00:09:41.058 { 00:09:41.058 "name": "BaseBdev4", 00:09:41.058 "uuid": "4e421dfd-10d7-4b05-888a-b2d67b82211b", 00:09:41.058 "is_configured": true, 00:09:41.058 "data_offset": 0, 00:09:41.058 "data_size": 65536 00:09:41.058 } 00:09:41.058 ] 00:09:41.058 }' 00:09:41.058 04:58:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:41.058 04:58:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.317 04:58:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:41.317 04:58:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.317 04:58:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:41.317 04:58:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.317 04:58:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.317 04:58:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:41.317 04:58:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:41.317 04:58:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.317 04:58:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.577 [2024-12-14 04:58:52.200049] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:41.577 04:58:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.577 04:58:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:41.577 04:58:52 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:41.577 04:58:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:41.577 04:58:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:41.577 04:58:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:41.577 04:58:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:41.577 04:58:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:41.577 04:58:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:41.577 04:58:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:41.577 04:58:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:41.577 04:58:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:41.577 04:58:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:41.577 04:58:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.577 04:58:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.577 04:58:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.577 04:58:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:41.577 "name": "Existed_Raid", 00:09:41.577 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:41.577 "strip_size_kb": 64, 00:09:41.577 "state": "configuring", 00:09:41.577 "raid_level": "concat", 00:09:41.577 "superblock": false, 00:09:41.577 "num_base_bdevs": 4, 00:09:41.577 "num_base_bdevs_discovered": 2, 00:09:41.577 "num_base_bdevs_operational": 4, 
00:09:41.577 "base_bdevs_list": [ 00:09:41.577 { 00:09:41.577 "name": null, 00:09:41.577 "uuid": "378f62d7-caae-4d86-8e98-fcf24cd73a31", 00:09:41.577 "is_configured": false, 00:09:41.577 "data_offset": 0, 00:09:41.577 "data_size": 65536 00:09:41.577 }, 00:09:41.577 { 00:09:41.577 "name": null, 00:09:41.577 "uuid": "6ab16949-1a17-448e-84d5-a37dcd110f09", 00:09:41.577 "is_configured": false, 00:09:41.577 "data_offset": 0, 00:09:41.577 "data_size": 65536 00:09:41.577 }, 00:09:41.577 { 00:09:41.577 "name": "BaseBdev3", 00:09:41.577 "uuid": "1a1cf5c8-cdaf-4f03-8167-a3a43c8a71de", 00:09:41.577 "is_configured": true, 00:09:41.577 "data_offset": 0, 00:09:41.577 "data_size": 65536 00:09:41.577 }, 00:09:41.577 { 00:09:41.577 "name": "BaseBdev4", 00:09:41.577 "uuid": "4e421dfd-10d7-4b05-888a-b2d67b82211b", 00:09:41.577 "is_configured": true, 00:09:41.577 "data_offset": 0, 00:09:41.577 "data_size": 65536 00:09:41.577 } 00:09:41.577 ] 00:09:41.577 }' 00:09:41.577 04:58:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:41.577 04:58:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.836 04:58:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:41.836 04:58:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:41.836 04:58:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.836 04:58:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.836 04:58:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.836 04:58:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:41.836 04:58:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:41.836 04:58:52 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.836 04:58:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.836 [2024-12-14 04:58:52.665690] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:41.836 04:58:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.836 04:58:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:41.836 04:58:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:41.836 04:58:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:41.836 04:58:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:41.836 04:58:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:41.836 04:58:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:41.836 04:58:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:41.836 04:58:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:41.836 04:58:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:41.836 04:58:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:41.836 04:58:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:41.836 04:58:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:41.836 04:58:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.836 04:58:52 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.836 04:58:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.094 04:58:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:42.094 "name": "Existed_Raid", 00:09:42.094 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:42.094 "strip_size_kb": 64, 00:09:42.094 "state": "configuring", 00:09:42.094 "raid_level": "concat", 00:09:42.094 "superblock": false, 00:09:42.094 "num_base_bdevs": 4, 00:09:42.094 "num_base_bdevs_discovered": 3, 00:09:42.094 "num_base_bdevs_operational": 4, 00:09:42.094 "base_bdevs_list": [ 00:09:42.094 { 00:09:42.094 "name": null, 00:09:42.094 "uuid": "378f62d7-caae-4d86-8e98-fcf24cd73a31", 00:09:42.094 "is_configured": false, 00:09:42.094 "data_offset": 0, 00:09:42.094 "data_size": 65536 00:09:42.094 }, 00:09:42.094 { 00:09:42.094 "name": "BaseBdev2", 00:09:42.094 "uuid": "6ab16949-1a17-448e-84d5-a37dcd110f09", 00:09:42.094 "is_configured": true, 00:09:42.094 "data_offset": 0, 00:09:42.094 "data_size": 65536 00:09:42.094 }, 00:09:42.094 { 00:09:42.094 "name": "BaseBdev3", 00:09:42.094 "uuid": "1a1cf5c8-cdaf-4f03-8167-a3a43c8a71de", 00:09:42.094 "is_configured": true, 00:09:42.094 "data_offset": 0, 00:09:42.094 "data_size": 65536 00:09:42.094 }, 00:09:42.094 { 00:09:42.094 "name": "BaseBdev4", 00:09:42.094 "uuid": "4e421dfd-10d7-4b05-888a-b2d67b82211b", 00:09:42.094 "is_configured": true, 00:09:42.094 "data_offset": 0, 00:09:42.094 "data_size": 65536 00:09:42.094 } 00:09:42.094 ] 00:09:42.094 }' 00:09:42.094 04:58:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:42.094 04:58:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.354 04:58:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:42.354 04:58:53 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:42.354 04:58:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.354 04:58:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.354 04:58:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.354 04:58:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:42.354 04:58:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:42.354 04:58:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:42.354 04:58:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.354 04:58:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.354 04:58:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.613 04:58:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 378f62d7-caae-4d86-8e98-fcf24cd73a31 00:09:42.613 04:58:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.613 04:58:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.613 [2024-12-14 04:58:53.251696] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:42.613 [2024-12-14 04:58:53.251823] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:09:42.613 [2024-12-14 04:58:53.251836] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:09:42.613 [2024-12-14 04:58:53.252133] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:42.613 [2024-12-14 04:58:53.252283] 
bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:09:42.613 [2024-12-14 04:58:53.252299] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:09:42.613 [2024-12-14 04:58:53.252497] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:42.613 NewBaseBdev 00:09:42.613 04:58:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.613 04:58:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:42.613 04:58:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:09:42.613 04:58:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:42.613 04:58:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:42.613 04:58:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:42.613 04:58:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:42.613 04:58:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:42.613 04:58:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.613 04:58:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.613 04:58:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.613 04:58:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:42.613 04:58:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.613 04:58:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.613 [ 00:09:42.613 { 
00:09:42.613 "name": "NewBaseBdev", 00:09:42.613 "aliases": [ 00:09:42.613 "378f62d7-caae-4d86-8e98-fcf24cd73a31" 00:09:42.613 ], 00:09:42.613 "product_name": "Malloc disk", 00:09:42.613 "block_size": 512, 00:09:42.613 "num_blocks": 65536, 00:09:42.613 "uuid": "378f62d7-caae-4d86-8e98-fcf24cd73a31", 00:09:42.613 "assigned_rate_limits": { 00:09:42.613 "rw_ios_per_sec": 0, 00:09:42.613 "rw_mbytes_per_sec": 0, 00:09:42.613 "r_mbytes_per_sec": 0, 00:09:42.613 "w_mbytes_per_sec": 0 00:09:42.613 }, 00:09:42.613 "claimed": true, 00:09:42.613 "claim_type": "exclusive_write", 00:09:42.613 "zoned": false, 00:09:42.613 "supported_io_types": { 00:09:42.613 "read": true, 00:09:42.613 "write": true, 00:09:42.613 "unmap": true, 00:09:42.613 "flush": true, 00:09:42.613 "reset": true, 00:09:42.613 "nvme_admin": false, 00:09:42.613 "nvme_io": false, 00:09:42.613 "nvme_io_md": false, 00:09:42.613 "write_zeroes": true, 00:09:42.613 "zcopy": true, 00:09:42.613 "get_zone_info": false, 00:09:42.613 "zone_management": false, 00:09:42.613 "zone_append": false, 00:09:42.613 "compare": false, 00:09:42.613 "compare_and_write": false, 00:09:42.613 "abort": true, 00:09:42.613 "seek_hole": false, 00:09:42.613 "seek_data": false, 00:09:42.613 "copy": true, 00:09:42.613 "nvme_iov_md": false 00:09:42.613 }, 00:09:42.613 "memory_domains": [ 00:09:42.613 { 00:09:42.613 "dma_device_id": "system", 00:09:42.613 "dma_device_type": 1 00:09:42.613 }, 00:09:42.613 { 00:09:42.613 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:42.613 "dma_device_type": 2 00:09:42.613 } 00:09:42.613 ], 00:09:42.613 "driver_specific": {} 00:09:42.613 } 00:09:42.613 ] 00:09:42.613 04:58:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.613 04:58:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:42.613 04:58:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:09:42.613 
04:58:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:42.613 04:58:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:42.613 04:58:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:42.613 04:58:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:42.613 04:58:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:42.614 04:58:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:42.614 04:58:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:42.614 04:58:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:42.614 04:58:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:42.614 04:58:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:42.614 04:58:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:42.614 04:58:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.614 04:58:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.614 04:58:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.614 04:58:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:42.614 "name": "Existed_Raid", 00:09:42.614 "uuid": "72910161-1445-425e-85ae-11516789915c", 00:09:42.614 "strip_size_kb": 64, 00:09:42.614 "state": "online", 00:09:42.614 "raid_level": "concat", 00:09:42.614 "superblock": false, 00:09:42.614 "num_base_bdevs": 4, 00:09:42.614 "num_base_bdevs_discovered": 4, 00:09:42.614 
"num_base_bdevs_operational": 4, 00:09:42.614 "base_bdevs_list": [ 00:09:42.614 { 00:09:42.614 "name": "NewBaseBdev", 00:09:42.614 "uuid": "378f62d7-caae-4d86-8e98-fcf24cd73a31", 00:09:42.614 "is_configured": true, 00:09:42.614 "data_offset": 0, 00:09:42.614 "data_size": 65536 00:09:42.614 }, 00:09:42.614 { 00:09:42.614 "name": "BaseBdev2", 00:09:42.614 "uuid": "6ab16949-1a17-448e-84d5-a37dcd110f09", 00:09:42.614 "is_configured": true, 00:09:42.614 "data_offset": 0, 00:09:42.614 "data_size": 65536 00:09:42.614 }, 00:09:42.614 { 00:09:42.614 "name": "BaseBdev3", 00:09:42.614 "uuid": "1a1cf5c8-cdaf-4f03-8167-a3a43c8a71de", 00:09:42.614 "is_configured": true, 00:09:42.614 "data_offset": 0, 00:09:42.614 "data_size": 65536 00:09:42.614 }, 00:09:42.614 { 00:09:42.614 "name": "BaseBdev4", 00:09:42.614 "uuid": "4e421dfd-10d7-4b05-888a-b2d67b82211b", 00:09:42.614 "is_configured": true, 00:09:42.614 "data_offset": 0, 00:09:42.614 "data_size": 65536 00:09:42.614 } 00:09:42.614 ] 00:09:42.614 }' 00:09:42.614 04:58:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:42.614 04:58:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.873 04:58:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:42.873 04:58:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:42.873 04:58:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:42.873 04:58:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:42.873 04:58:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:42.873 04:58:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:42.873 04:58:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
Existed_Raid 00:09:42.873 04:58:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:42.873 04:58:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.873 04:58:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.873 [2024-12-14 04:58:53.719241] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:42.873 04:58:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.133 04:58:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:43.133 "name": "Existed_Raid", 00:09:43.133 "aliases": [ 00:09:43.133 "72910161-1445-425e-85ae-11516789915c" 00:09:43.133 ], 00:09:43.133 "product_name": "Raid Volume", 00:09:43.133 "block_size": 512, 00:09:43.133 "num_blocks": 262144, 00:09:43.133 "uuid": "72910161-1445-425e-85ae-11516789915c", 00:09:43.133 "assigned_rate_limits": { 00:09:43.133 "rw_ios_per_sec": 0, 00:09:43.133 "rw_mbytes_per_sec": 0, 00:09:43.133 "r_mbytes_per_sec": 0, 00:09:43.133 "w_mbytes_per_sec": 0 00:09:43.133 }, 00:09:43.133 "claimed": false, 00:09:43.133 "zoned": false, 00:09:43.133 "supported_io_types": { 00:09:43.133 "read": true, 00:09:43.133 "write": true, 00:09:43.133 "unmap": true, 00:09:43.133 "flush": true, 00:09:43.133 "reset": true, 00:09:43.133 "nvme_admin": false, 00:09:43.133 "nvme_io": false, 00:09:43.133 "nvme_io_md": false, 00:09:43.133 "write_zeroes": true, 00:09:43.133 "zcopy": false, 00:09:43.133 "get_zone_info": false, 00:09:43.133 "zone_management": false, 00:09:43.133 "zone_append": false, 00:09:43.133 "compare": false, 00:09:43.133 "compare_and_write": false, 00:09:43.133 "abort": false, 00:09:43.133 "seek_hole": false, 00:09:43.133 "seek_data": false, 00:09:43.133 "copy": false, 00:09:43.133 "nvme_iov_md": false 00:09:43.133 }, 00:09:43.133 "memory_domains": [ 00:09:43.133 { 00:09:43.133 "dma_device_id": "system", 
00:09:43.133 "dma_device_type": 1 00:09:43.133 }, 00:09:43.133 { 00:09:43.133 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:43.133 "dma_device_type": 2 00:09:43.133 }, 00:09:43.133 { 00:09:43.133 "dma_device_id": "system", 00:09:43.133 "dma_device_type": 1 00:09:43.133 }, 00:09:43.133 { 00:09:43.133 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:43.133 "dma_device_type": 2 00:09:43.133 }, 00:09:43.133 { 00:09:43.133 "dma_device_id": "system", 00:09:43.133 "dma_device_type": 1 00:09:43.133 }, 00:09:43.133 { 00:09:43.133 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:43.133 "dma_device_type": 2 00:09:43.133 }, 00:09:43.133 { 00:09:43.133 "dma_device_id": "system", 00:09:43.133 "dma_device_type": 1 00:09:43.133 }, 00:09:43.133 { 00:09:43.133 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:43.133 "dma_device_type": 2 00:09:43.133 } 00:09:43.133 ], 00:09:43.133 "driver_specific": { 00:09:43.133 "raid": { 00:09:43.133 "uuid": "72910161-1445-425e-85ae-11516789915c", 00:09:43.133 "strip_size_kb": 64, 00:09:43.133 "state": "online", 00:09:43.133 "raid_level": "concat", 00:09:43.133 "superblock": false, 00:09:43.133 "num_base_bdevs": 4, 00:09:43.133 "num_base_bdevs_discovered": 4, 00:09:43.133 "num_base_bdevs_operational": 4, 00:09:43.133 "base_bdevs_list": [ 00:09:43.133 { 00:09:43.133 "name": "NewBaseBdev", 00:09:43.133 "uuid": "378f62d7-caae-4d86-8e98-fcf24cd73a31", 00:09:43.133 "is_configured": true, 00:09:43.133 "data_offset": 0, 00:09:43.133 "data_size": 65536 00:09:43.133 }, 00:09:43.133 { 00:09:43.133 "name": "BaseBdev2", 00:09:43.133 "uuid": "6ab16949-1a17-448e-84d5-a37dcd110f09", 00:09:43.133 "is_configured": true, 00:09:43.133 "data_offset": 0, 00:09:43.133 "data_size": 65536 00:09:43.133 }, 00:09:43.133 { 00:09:43.133 "name": "BaseBdev3", 00:09:43.133 "uuid": "1a1cf5c8-cdaf-4f03-8167-a3a43c8a71de", 00:09:43.134 "is_configured": true, 00:09:43.134 "data_offset": 0, 00:09:43.134 "data_size": 65536 00:09:43.134 }, 00:09:43.134 { 00:09:43.134 "name": "BaseBdev4", 
00:09:43.134 "uuid": "4e421dfd-10d7-4b05-888a-b2d67b82211b", 00:09:43.134 "is_configured": true, 00:09:43.134 "data_offset": 0, 00:09:43.134 "data_size": 65536 00:09:43.134 } 00:09:43.134 ] 00:09:43.134 } 00:09:43.134 } 00:09:43.134 }' 00:09:43.134 04:58:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:43.134 04:58:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:43.134 BaseBdev2 00:09:43.134 BaseBdev3 00:09:43.134 BaseBdev4' 00:09:43.134 04:58:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:43.134 04:58:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:43.134 04:58:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:43.134 04:58:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:43.134 04:58:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:43.134 04:58:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.134 04:58:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.134 04:58:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.134 04:58:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:43.134 04:58:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:43.134 04:58:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:43.134 04:58:53 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:43.134 04:58:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:43.134 04:58:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.134 04:58:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.134 04:58:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.134 04:58:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:43.134 04:58:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:43.134 04:58:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:43.134 04:58:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:43.134 04:58:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:43.134 04:58:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.134 04:58:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.134 04:58:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.134 04:58:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:43.134 04:58:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:43.134 04:58:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:43.134 04:58:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:09:43.134 04:58:53 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:43.134 04:58:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.134 04:58:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.134 04:58:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.134 04:58:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:43.134 04:58:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:43.134 04:58:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:43.134 04:58:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.134 04:58:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.394 [2024-12-14 04:58:54.018403] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:43.394 [2024-12-14 04:58:54.018432] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:43.394 [2024-12-14 04:58:54.018495] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:43.394 [2024-12-14 04:58:54.018557] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:43.394 [2024-12-14 04:58:54.018575] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline 00:09:43.394 04:58:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.394 04:58:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 82172 00:09:43.394 04:58:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 
-- # '[' -z 82172 ']' 00:09:43.394 04:58:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 82172 00:09:43.394 04:58:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:09:43.394 04:58:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:43.394 04:58:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82172 00:09:43.394 killing process with pid 82172 00:09:43.394 04:58:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:43.394 04:58:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:43.394 04:58:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82172' 00:09:43.394 04:58:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 82172 00:09:43.394 [2024-12-14 04:58:54.062368] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:43.394 04:58:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 82172 00:09:43.394 [2024-12-14 04:58:54.102118] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:43.653 ************************************ 00:09:43.653 END TEST raid_state_function_test 00:09:43.653 ************************************ 00:09:43.654 04:58:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:09:43.654 00:09:43.654 real 0m9.509s 00:09:43.654 user 0m16.302s 00:09:43.654 sys 0m1.954s 00:09:43.654 04:58:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:43.654 04:58:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.654 04:58:54 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 4 true 
00:09:43.654 04:58:54 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:43.654 04:58:54 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:43.654 04:58:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:43.654 ************************************ 00:09:43.654 START TEST raid_state_function_test_sb 00:09:43.654 ************************************ 00:09:43.654 04:58:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 4 true 00:09:43.654 04:58:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:09:43.654 04:58:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:09:43.654 04:58:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:09:43.654 04:58:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:43.654 04:58:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:43.654 04:58:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:43.654 04:58:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:43.654 04:58:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:43.654 04:58:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:43.654 04:58:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:43.654 04:58:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:43.654 04:58:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:43.654 04:58:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:43.654 04:58:54 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:43.654 04:58:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:43.654 04:58:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:09:43.654 04:58:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:43.654 04:58:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:43.654 04:58:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:09:43.654 04:58:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:43.654 04:58:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:43.654 04:58:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:43.654 04:58:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:43.654 04:58:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:43.654 04:58:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:09:43.654 04:58:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:43.654 04:58:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:43.654 04:58:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:09:43.654 04:58:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:09:43.654 04:58:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=82827 00:09:43.654 04:58:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:43.654 04:58:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 82827' 00:09:43.654 Process raid pid: 82827 00:09:43.654 04:58:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 82827 00:09:43.654 04:58:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 82827 ']' 00:09:43.654 04:58:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:43.654 04:58:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:43.654 04:58:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:43.654 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:43.654 04:58:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:43.654 04:58:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.654 [2024-12-14 04:58:54.513632] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:09:43.654 [2024-12-14 04:58:54.513777] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:43.914 [2024-12-14 04:58:54.673915] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:43.914 [2024-12-14 04:58:54.718899] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:43.914 [2024-12-14 04:58:54.760512] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:43.914 [2024-12-14 04:58:54.760637] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:44.483 04:58:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:44.483 04:58:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:09:44.483 04:58:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:44.483 04:58:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.483 04:58:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.483 [2024-12-14 04:58:55.338282] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:44.483 [2024-12-14 04:58:55.338326] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:44.483 [2024-12-14 04:58:55.338338] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:44.483 [2024-12-14 04:58:55.338347] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:44.483 [2024-12-14 04:58:55.338354] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:09:44.483 [2024-12-14 04:58:55.338365] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:44.483 [2024-12-14 04:58:55.338371] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:44.483 [2024-12-14 04:58:55.338382] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:44.483 04:58:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.483 04:58:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:44.483 04:58:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:44.483 04:58:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:44.483 04:58:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:44.483 04:58:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:44.483 04:58:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:44.483 04:58:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:44.483 04:58:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:44.483 04:58:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:44.483 04:58:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:44.483 04:58:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:44.483 04:58:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:44.483 
04:58:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.483 04:58:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.741 04:58:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.741 04:58:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:44.741 "name": "Existed_Raid", 00:09:44.741 "uuid": "7f395102-73f2-427d-95b2-a3f21b14ee35", 00:09:44.741 "strip_size_kb": 64, 00:09:44.741 "state": "configuring", 00:09:44.741 "raid_level": "concat", 00:09:44.741 "superblock": true, 00:09:44.741 "num_base_bdevs": 4, 00:09:44.742 "num_base_bdevs_discovered": 0, 00:09:44.742 "num_base_bdevs_operational": 4, 00:09:44.742 "base_bdevs_list": [ 00:09:44.742 { 00:09:44.742 "name": "BaseBdev1", 00:09:44.742 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:44.742 "is_configured": false, 00:09:44.742 "data_offset": 0, 00:09:44.742 "data_size": 0 00:09:44.742 }, 00:09:44.742 { 00:09:44.742 "name": "BaseBdev2", 00:09:44.742 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:44.742 "is_configured": false, 00:09:44.742 "data_offset": 0, 00:09:44.742 "data_size": 0 00:09:44.742 }, 00:09:44.742 { 00:09:44.742 "name": "BaseBdev3", 00:09:44.742 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:44.742 "is_configured": false, 00:09:44.742 "data_offset": 0, 00:09:44.742 "data_size": 0 00:09:44.742 }, 00:09:44.742 { 00:09:44.742 "name": "BaseBdev4", 00:09:44.742 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:44.742 "is_configured": false, 00:09:44.742 "data_offset": 0, 00:09:44.742 "data_size": 0 00:09:44.742 } 00:09:44.742 ] 00:09:44.742 }' 00:09:44.742 04:58:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:44.742 04:58:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.001 04:58:55 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:45.001 04:58:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.001 04:58:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.001 [2024-12-14 04:58:55.825317] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:45.001 [2024-12-14 04:58:55.825403] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:09:45.001 04:58:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.001 04:58:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:45.001 04:58:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.001 04:58:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.001 [2024-12-14 04:58:55.837340] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:45.001 [2024-12-14 04:58:55.837416] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:45.001 [2024-12-14 04:58:55.837442] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:45.001 [2024-12-14 04:58:55.837464] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:45.001 [2024-12-14 04:58:55.837481] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:45.001 [2024-12-14 04:58:55.837501] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:45.001 [2024-12-14 04:58:55.837520] bdev.c:8272:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev4 00:09:45.001 [2024-12-14 04:58:55.837548] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:45.001 04:58:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.001 04:58:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:45.001 04:58:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.001 04:58:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.001 [2024-12-14 04:58:55.858056] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:45.001 BaseBdev1 00:09:45.001 04:58:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.001 04:58:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:45.001 04:58:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:45.001 04:58:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:45.001 04:58:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:45.001 04:58:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:45.001 04:58:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:45.001 04:58:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:45.001 04:58:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.001 04:58:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.001 04:58:55 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.001 04:58:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:45.001 04:58:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.001 04:58:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.261 [ 00:09:45.261 { 00:09:45.261 "name": "BaseBdev1", 00:09:45.261 "aliases": [ 00:09:45.261 "d46199be-26c8-4e36-8f86-604926ad9e9b" 00:09:45.261 ], 00:09:45.261 "product_name": "Malloc disk", 00:09:45.261 "block_size": 512, 00:09:45.261 "num_blocks": 65536, 00:09:45.261 "uuid": "d46199be-26c8-4e36-8f86-604926ad9e9b", 00:09:45.261 "assigned_rate_limits": { 00:09:45.261 "rw_ios_per_sec": 0, 00:09:45.261 "rw_mbytes_per_sec": 0, 00:09:45.261 "r_mbytes_per_sec": 0, 00:09:45.261 "w_mbytes_per_sec": 0 00:09:45.261 }, 00:09:45.261 "claimed": true, 00:09:45.261 "claim_type": "exclusive_write", 00:09:45.261 "zoned": false, 00:09:45.261 "supported_io_types": { 00:09:45.261 "read": true, 00:09:45.261 "write": true, 00:09:45.261 "unmap": true, 00:09:45.261 "flush": true, 00:09:45.261 "reset": true, 00:09:45.261 "nvme_admin": false, 00:09:45.261 "nvme_io": false, 00:09:45.261 "nvme_io_md": false, 00:09:45.261 "write_zeroes": true, 00:09:45.261 "zcopy": true, 00:09:45.261 "get_zone_info": false, 00:09:45.261 "zone_management": false, 00:09:45.261 "zone_append": false, 00:09:45.261 "compare": false, 00:09:45.261 "compare_and_write": false, 00:09:45.261 "abort": true, 00:09:45.261 "seek_hole": false, 00:09:45.261 "seek_data": false, 00:09:45.261 "copy": true, 00:09:45.261 "nvme_iov_md": false 00:09:45.261 }, 00:09:45.261 "memory_domains": [ 00:09:45.261 { 00:09:45.261 "dma_device_id": "system", 00:09:45.261 "dma_device_type": 1 00:09:45.261 }, 00:09:45.261 { 00:09:45.261 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:45.261 "dma_device_type": 2 00:09:45.261 } 
00:09:45.261 ], 00:09:45.261 "driver_specific": {} 00:09:45.261 } 00:09:45.261 ] 00:09:45.261 04:58:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.261 04:58:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:45.261 04:58:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:45.261 04:58:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:45.261 04:58:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:45.261 04:58:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:45.261 04:58:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:45.261 04:58:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:45.261 04:58:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:45.261 04:58:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:45.261 04:58:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:45.261 04:58:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:45.261 04:58:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:45.261 04:58:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:45.261 04:58:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.261 04:58:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.261 04:58:55 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.261 04:58:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:45.261 "name": "Existed_Raid", 00:09:45.261 "uuid": "d82a56af-b2c3-40c5-859a-5ac06607e29e", 00:09:45.261 "strip_size_kb": 64, 00:09:45.261 "state": "configuring", 00:09:45.261 "raid_level": "concat", 00:09:45.261 "superblock": true, 00:09:45.261 "num_base_bdevs": 4, 00:09:45.261 "num_base_bdevs_discovered": 1, 00:09:45.261 "num_base_bdevs_operational": 4, 00:09:45.261 "base_bdevs_list": [ 00:09:45.261 { 00:09:45.261 "name": "BaseBdev1", 00:09:45.261 "uuid": "d46199be-26c8-4e36-8f86-604926ad9e9b", 00:09:45.261 "is_configured": true, 00:09:45.261 "data_offset": 2048, 00:09:45.261 "data_size": 63488 00:09:45.261 }, 00:09:45.261 { 00:09:45.261 "name": "BaseBdev2", 00:09:45.261 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:45.261 "is_configured": false, 00:09:45.261 "data_offset": 0, 00:09:45.261 "data_size": 0 00:09:45.261 }, 00:09:45.261 { 00:09:45.261 "name": "BaseBdev3", 00:09:45.261 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:45.261 "is_configured": false, 00:09:45.261 "data_offset": 0, 00:09:45.261 "data_size": 0 00:09:45.261 }, 00:09:45.261 { 00:09:45.261 "name": "BaseBdev4", 00:09:45.261 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:45.261 "is_configured": false, 00:09:45.261 "data_offset": 0, 00:09:45.261 "data_size": 0 00:09:45.261 } 00:09:45.261 ] 00:09:45.261 }' 00:09:45.261 04:58:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:45.261 04:58:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.521 04:58:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:45.521 04:58:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.521 04:58:56 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.521 [2024-12-14 04:58:56.341262] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:45.521 [2024-12-14 04:58:56.341348] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:09:45.521 04:58:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.521 04:58:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:45.521 04:58:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.521 04:58:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.521 [2024-12-14 04:58:56.349317] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:45.521 [2024-12-14 04:58:56.351305] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:45.521 [2024-12-14 04:58:56.351345] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:45.521 [2024-12-14 04:58:56.351355] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:45.521 [2024-12-14 04:58:56.351364] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:45.521 [2024-12-14 04:58:56.351370] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:45.521 [2024-12-14 04:58:56.351379] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:45.521 04:58:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.521 04:58:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # 
(( i = 1 )) 00:09:45.521 04:58:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:45.521 04:58:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:45.521 04:58:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:45.521 04:58:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:45.521 04:58:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:45.521 04:58:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:45.521 04:58:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:45.521 04:58:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:45.521 04:58:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:45.521 04:58:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:45.521 04:58:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:45.521 04:58:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:45.521 04:58:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:45.521 04:58:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.521 04:58:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.521 04:58:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.521 04:58:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:09:45.521 "name": "Existed_Raid", 00:09:45.521 "uuid": "b7fdf073-796c-482e-9e83-8473c0815197", 00:09:45.521 "strip_size_kb": 64, 00:09:45.521 "state": "configuring", 00:09:45.521 "raid_level": "concat", 00:09:45.521 "superblock": true, 00:09:45.521 "num_base_bdevs": 4, 00:09:45.521 "num_base_bdevs_discovered": 1, 00:09:45.521 "num_base_bdevs_operational": 4, 00:09:45.521 "base_bdevs_list": [ 00:09:45.521 { 00:09:45.521 "name": "BaseBdev1", 00:09:45.521 "uuid": "d46199be-26c8-4e36-8f86-604926ad9e9b", 00:09:45.521 "is_configured": true, 00:09:45.521 "data_offset": 2048, 00:09:45.521 "data_size": 63488 00:09:45.521 }, 00:09:45.521 { 00:09:45.521 "name": "BaseBdev2", 00:09:45.521 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:45.521 "is_configured": false, 00:09:45.521 "data_offset": 0, 00:09:45.521 "data_size": 0 00:09:45.521 }, 00:09:45.521 { 00:09:45.521 "name": "BaseBdev3", 00:09:45.521 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:45.521 "is_configured": false, 00:09:45.521 "data_offset": 0, 00:09:45.521 "data_size": 0 00:09:45.521 }, 00:09:45.521 { 00:09:45.521 "name": "BaseBdev4", 00:09:45.521 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:45.521 "is_configured": false, 00:09:45.521 "data_offset": 0, 00:09:45.521 "data_size": 0 00:09:45.521 } 00:09:45.521 ] 00:09:45.521 }' 00:09:45.521 04:58:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:45.521 04:58:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.091 04:58:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:46.091 04:58:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.091 04:58:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.091 [2024-12-14 04:58:56.768456] bdev_raid.c:3322:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev2 is claimed 00:09:46.091 BaseBdev2 00:09:46.091 04:58:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.091 04:58:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:46.091 04:58:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:46.091 04:58:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:46.091 04:58:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:46.091 04:58:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:46.091 04:58:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:46.091 04:58:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:46.091 04:58:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.091 04:58:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.091 04:58:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.091 04:58:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:46.091 04:58:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.091 04:58:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.091 [ 00:09:46.091 { 00:09:46.091 "name": "BaseBdev2", 00:09:46.091 "aliases": [ 00:09:46.091 "22c2e313-ad18-461d-80b6-96586ca1883f" 00:09:46.091 ], 00:09:46.091 "product_name": "Malloc disk", 00:09:46.091 "block_size": 512, 00:09:46.091 "num_blocks": 65536, 00:09:46.091 "uuid": "22c2e313-ad18-461d-80b6-96586ca1883f", 
00:09:46.091 "assigned_rate_limits": { 00:09:46.091 "rw_ios_per_sec": 0, 00:09:46.091 "rw_mbytes_per_sec": 0, 00:09:46.091 "r_mbytes_per_sec": 0, 00:09:46.091 "w_mbytes_per_sec": 0 00:09:46.091 }, 00:09:46.091 "claimed": true, 00:09:46.091 "claim_type": "exclusive_write", 00:09:46.091 "zoned": false, 00:09:46.091 "supported_io_types": { 00:09:46.091 "read": true, 00:09:46.091 "write": true, 00:09:46.091 "unmap": true, 00:09:46.091 "flush": true, 00:09:46.091 "reset": true, 00:09:46.091 "nvme_admin": false, 00:09:46.091 "nvme_io": false, 00:09:46.091 "nvme_io_md": false, 00:09:46.091 "write_zeroes": true, 00:09:46.091 "zcopy": true, 00:09:46.091 "get_zone_info": false, 00:09:46.091 "zone_management": false, 00:09:46.091 "zone_append": false, 00:09:46.091 "compare": false, 00:09:46.091 "compare_and_write": false, 00:09:46.091 "abort": true, 00:09:46.091 "seek_hole": false, 00:09:46.091 "seek_data": false, 00:09:46.091 "copy": true, 00:09:46.091 "nvme_iov_md": false 00:09:46.091 }, 00:09:46.091 "memory_domains": [ 00:09:46.091 { 00:09:46.091 "dma_device_id": "system", 00:09:46.091 "dma_device_type": 1 00:09:46.091 }, 00:09:46.091 { 00:09:46.091 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:46.091 "dma_device_type": 2 00:09:46.091 } 00:09:46.091 ], 00:09:46.091 "driver_specific": {} 00:09:46.091 } 00:09:46.091 ] 00:09:46.091 04:58:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.091 04:58:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:46.091 04:58:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:46.091 04:58:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:46.091 04:58:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:46.091 04:58:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:09:46.091 04:58:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:46.091 04:58:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:46.091 04:58:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:46.091 04:58:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:46.091 04:58:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:46.091 04:58:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:46.091 04:58:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:46.091 04:58:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:46.091 04:58:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:46.091 04:58:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.091 04:58:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:46.091 04:58:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.091 04:58:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.091 04:58:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:46.091 "name": "Existed_Raid", 00:09:46.091 "uuid": "b7fdf073-796c-482e-9e83-8473c0815197", 00:09:46.091 "strip_size_kb": 64, 00:09:46.091 "state": "configuring", 00:09:46.091 "raid_level": "concat", 00:09:46.091 "superblock": true, 00:09:46.091 "num_base_bdevs": 4, 00:09:46.091 "num_base_bdevs_discovered": 2, 00:09:46.091 
"num_base_bdevs_operational": 4, 00:09:46.091 "base_bdevs_list": [ 00:09:46.091 { 00:09:46.091 "name": "BaseBdev1", 00:09:46.091 "uuid": "d46199be-26c8-4e36-8f86-604926ad9e9b", 00:09:46.091 "is_configured": true, 00:09:46.091 "data_offset": 2048, 00:09:46.091 "data_size": 63488 00:09:46.091 }, 00:09:46.092 { 00:09:46.092 "name": "BaseBdev2", 00:09:46.092 "uuid": "22c2e313-ad18-461d-80b6-96586ca1883f", 00:09:46.092 "is_configured": true, 00:09:46.092 "data_offset": 2048, 00:09:46.092 "data_size": 63488 00:09:46.092 }, 00:09:46.092 { 00:09:46.092 "name": "BaseBdev3", 00:09:46.092 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:46.092 "is_configured": false, 00:09:46.092 "data_offset": 0, 00:09:46.092 "data_size": 0 00:09:46.092 }, 00:09:46.092 { 00:09:46.092 "name": "BaseBdev4", 00:09:46.092 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:46.092 "is_configured": false, 00:09:46.092 "data_offset": 0, 00:09:46.092 "data_size": 0 00:09:46.092 } 00:09:46.092 ] 00:09:46.092 }' 00:09:46.092 04:58:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:46.092 04:58:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.351 04:58:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:46.351 04:58:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.351 04:58:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.611 [2024-12-14 04:58:57.242593] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:46.611 BaseBdev3 00:09:46.611 04:58:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.611 04:58:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:46.611 04:58:57 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:09:46.611 04:58:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:46.611 04:58:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:46.611 04:58:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:46.611 04:58:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:46.611 04:58:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:46.611 04:58:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.611 04:58:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.611 04:58:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.611 04:58:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:46.611 04:58:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.611 04:58:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.611 [ 00:09:46.611 { 00:09:46.611 "name": "BaseBdev3", 00:09:46.611 "aliases": [ 00:09:46.611 "1792fc6b-4c3e-4f99-834e-2bbf6516dba0" 00:09:46.611 ], 00:09:46.611 "product_name": "Malloc disk", 00:09:46.611 "block_size": 512, 00:09:46.611 "num_blocks": 65536, 00:09:46.611 "uuid": "1792fc6b-4c3e-4f99-834e-2bbf6516dba0", 00:09:46.611 "assigned_rate_limits": { 00:09:46.611 "rw_ios_per_sec": 0, 00:09:46.611 "rw_mbytes_per_sec": 0, 00:09:46.611 "r_mbytes_per_sec": 0, 00:09:46.611 "w_mbytes_per_sec": 0 00:09:46.611 }, 00:09:46.611 "claimed": true, 00:09:46.611 "claim_type": "exclusive_write", 00:09:46.611 "zoned": false, 00:09:46.611 "supported_io_types": { 
00:09:46.611 "read": true, 00:09:46.611 "write": true, 00:09:46.611 "unmap": true, 00:09:46.611 "flush": true, 00:09:46.611 "reset": true, 00:09:46.611 "nvme_admin": false, 00:09:46.611 "nvme_io": false, 00:09:46.611 "nvme_io_md": false, 00:09:46.611 "write_zeroes": true, 00:09:46.611 "zcopy": true, 00:09:46.611 "get_zone_info": false, 00:09:46.611 "zone_management": false, 00:09:46.611 "zone_append": false, 00:09:46.611 "compare": false, 00:09:46.611 "compare_and_write": false, 00:09:46.611 "abort": true, 00:09:46.611 "seek_hole": false, 00:09:46.611 "seek_data": false, 00:09:46.611 "copy": true, 00:09:46.611 "nvme_iov_md": false 00:09:46.611 }, 00:09:46.611 "memory_domains": [ 00:09:46.611 { 00:09:46.611 "dma_device_id": "system", 00:09:46.611 "dma_device_type": 1 00:09:46.611 }, 00:09:46.611 { 00:09:46.611 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:46.611 "dma_device_type": 2 00:09:46.611 } 00:09:46.611 ], 00:09:46.611 "driver_specific": {} 00:09:46.611 } 00:09:46.611 ] 00:09:46.611 04:58:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.611 04:58:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:46.611 04:58:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:46.611 04:58:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:46.611 04:58:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:46.611 04:58:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:46.611 04:58:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:46.611 04:58:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:46.611 04:58:57 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:46.611 04:58:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:46.611 04:58:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:46.611 04:58:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:46.611 04:58:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:46.611 04:58:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:46.611 04:58:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:46.611 04:58:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:46.611 04:58:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.611 04:58:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.611 04:58:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.611 04:58:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:46.611 "name": "Existed_Raid", 00:09:46.611 "uuid": "b7fdf073-796c-482e-9e83-8473c0815197", 00:09:46.611 "strip_size_kb": 64, 00:09:46.611 "state": "configuring", 00:09:46.611 "raid_level": "concat", 00:09:46.611 "superblock": true, 00:09:46.611 "num_base_bdevs": 4, 00:09:46.611 "num_base_bdevs_discovered": 3, 00:09:46.611 "num_base_bdevs_operational": 4, 00:09:46.611 "base_bdevs_list": [ 00:09:46.611 { 00:09:46.611 "name": "BaseBdev1", 00:09:46.611 "uuid": "d46199be-26c8-4e36-8f86-604926ad9e9b", 00:09:46.611 "is_configured": true, 00:09:46.611 "data_offset": 2048, 00:09:46.611 "data_size": 63488 00:09:46.611 }, 00:09:46.611 { 00:09:46.611 "name": "BaseBdev2", 00:09:46.611 
"uuid": "22c2e313-ad18-461d-80b6-96586ca1883f", 00:09:46.611 "is_configured": true, 00:09:46.611 "data_offset": 2048, 00:09:46.611 "data_size": 63488 00:09:46.611 }, 00:09:46.611 { 00:09:46.611 "name": "BaseBdev3", 00:09:46.611 "uuid": "1792fc6b-4c3e-4f99-834e-2bbf6516dba0", 00:09:46.611 "is_configured": true, 00:09:46.611 "data_offset": 2048, 00:09:46.611 "data_size": 63488 00:09:46.611 }, 00:09:46.611 { 00:09:46.611 "name": "BaseBdev4", 00:09:46.612 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:46.612 "is_configured": false, 00:09:46.612 "data_offset": 0, 00:09:46.612 "data_size": 0 00:09:46.612 } 00:09:46.612 ] 00:09:46.612 }' 00:09:46.612 04:58:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:46.612 04:58:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.871 04:58:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:09:46.871 04:58:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.871 04:58:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.871 [2024-12-14 04:58:57.736717] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:09:46.871 BaseBdev4 00:09:46.871 [2024-12-14 04:58:57.737029] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:09:46.871 [2024-12-14 04:58:57.737051] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:09:46.871 [2024-12-14 04:58:57.737327] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:46.871 [2024-12-14 04:58:57.737468] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:09:46.871 [2024-12-14 04:58:57.737485] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000006980 00:09:46.871 [2024-12-14 04:58:57.737616] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:46.871 04:58:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.871 04:58:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:09:46.871 04:58:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:09:46.871 04:58:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:46.871 04:58:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:46.871 04:58:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:46.871 04:58:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:46.871 04:58:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:46.871 04:58:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.871 04:58:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.871 04:58:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.871 04:58:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:09:46.871 04:58:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.871 04:58:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.131 [ 00:09:47.131 { 00:09:47.131 "name": "BaseBdev4", 00:09:47.131 "aliases": [ 00:09:47.131 "6f69ec77-4950-487f-973e-50b9e2d4fe16" 00:09:47.131 ], 00:09:47.131 "product_name": "Malloc disk", 00:09:47.131 "block_size": 512, 00:09:47.131 
"num_blocks": 65536, 00:09:47.131 "uuid": "6f69ec77-4950-487f-973e-50b9e2d4fe16", 00:09:47.131 "assigned_rate_limits": { 00:09:47.131 "rw_ios_per_sec": 0, 00:09:47.131 "rw_mbytes_per_sec": 0, 00:09:47.131 "r_mbytes_per_sec": 0, 00:09:47.131 "w_mbytes_per_sec": 0 00:09:47.131 }, 00:09:47.131 "claimed": true, 00:09:47.131 "claim_type": "exclusive_write", 00:09:47.131 "zoned": false, 00:09:47.131 "supported_io_types": { 00:09:47.131 "read": true, 00:09:47.131 "write": true, 00:09:47.131 "unmap": true, 00:09:47.131 "flush": true, 00:09:47.131 "reset": true, 00:09:47.131 "nvme_admin": false, 00:09:47.131 "nvme_io": false, 00:09:47.131 "nvme_io_md": false, 00:09:47.131 "write_zeroes": true, 00:09:47.131 "zcopy": true, 00:09:47.131 "get_zone_info": false, 00:09:47.131 "zone_management": false, 00:09:47.131 "zone_append": false, 00:09:47.131 "compare": false, 00:09:47.131 "compare_and_write": false, 00:09:47.131 "abort": true, 00:09:47.131 "seek_hole": false, 00:09:47.131 "seek_data": false, 00:09:47.131 "copy": true, 00:09:47.131 "nvme_iov_md": false 00:09:47.131 }, 00:09:47.131 "memory_domains": [ 00:09:47.131 { 00:09:47.131 "dma_device_id": "system", 00:09:47.131 "dma_device_type": 1 00:09:47.131 }, 00:09:47.131 { 00:09:47.131 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:47.131 "dma_device_type": 2 00:09:47.131 } 00:09:47.131 ], 00:09:47.131 "driver_specific": {} 00:09:47.131 } 00:09:47.131 ] 00:09:47.131 04:58:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.131 04:58:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:47.131 04:58:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:47.131 04:58:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:47.131 04:58:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 
00:09:47.131 04:58:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:47.131 04:58:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:47.131 04:58:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:47.131 04:58:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:47.131 04:58:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:47.131 04:58:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:47.131 04:58:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:47.131 04:58:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:47.131 04:58:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:47.131 04:58:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:47.131 04:58:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.131 04:58:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.131 04:58:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:47.131 04:58:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.131 04:58:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:47.131 "name": "Existed_Raid", 00:09:47.131 "uuid": "b7fdf073-796c-482e-9e83-8473c0815197", 00:09:47.131 "strip_size_kb": 64, 00:09:47.131 "state": "online", 00:09:47.131 "raid_level": "concat", 00:09:47.131 "superblock": true, 00:09:47.131 "num_base_bdevs": 4, 
00:09:47.131 "num_base_bdevs_discovered": 4, 00:09:47.131 "num_base_bdevs_operational": 4, 00:09:47.131 "base_bdevs_list": [ 00:09:47.131 { 00:09:47.132 "name": "BaseBdev1", 00:09:47.132 "uuid": "d46199be-26c8-4e36-8f86-604926ad9e9b", 00:09:47.132 "is_configured": true, 00:09:47.132 "data_offset": 2048, 00:09:47.132 "data_size": 63488 00:09:47.132 }, 00:09:47.132 { 00:09:47.132 "name": "BaseBdev2", 00:09:47.132 "uuid": "22c2e313-ad18-461d-80b6-96586ca1883f", 00:09:47.132 "is_configured": true, 00:09:47.132 "data_offset": 2048, 00:09:47.132 "data_size": 63488 00:09:47.132 }, 00:09:47.132 { 00:09:47.132 "name": "BaseBdev3", 00:09:47.132 "uuid": "1792fc6b-4c3e-4f99-834e-2bbf6516dba0", 00:09:47.132 "is_configured": true, 00:09:47.132 "data_offset": 2048, 00:09:47.132 "data_size": 63488 00:09:47.132 }, 00:09:47.132 { 00:09:47.132 "name": "BaseBdev4", 00:09:47.132 "uuid": "6f69ec77-4950-487f-973e-50b9e2d4fe16", 00:09:47.132 "is_configured": true, 00:09:47.132 "data_offset": 2048, 00:09:47.132 "data_size": 63488 00:09:47.132 } 00:09:47.132 ] 00:09:47.132 }' 00:09:47.132 04:58:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:47.132 04:58:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.391 04:58:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:47.391 04:58:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:47.391 04:58:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:47.391 04:58:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:47.391 04:58:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:47.391 04:58:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:47.391 
04:58:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:47.391 04:58:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:47.391 04:58:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.391 04:58:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.391 [2024-12-14 04:58:58.164360] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:47.391 04:58:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.391 04:58:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:47.391 "name": "Existed_Raid", 00:09:47.391 "aliases": [ 00:09:47.391 "b7fdf073-796c-482e-9e83-8473c0815197" 00:09:47.391 ], 00:09:47.391 "product_name": "Raid Volume", 00:09:47.391 "block_size": 512, 00:09:47.391 "num_blocks": 253952, 00:09:47.391 "uuid": "b7fdf073-796c-482e-9e83-8473c0815197", 00:09:47.391 "assigned_rate_limits": { 00:09:47.391 "rw_ios_per_sec": 0, 00:09:47.391 "rw_mbytes_per_sec": 0, 00:09:47.391 "r_mbytes_per_sec": 0, 00:09:47.391 "w_mbytes_per_sec": 0 00:09:47.391 }, 00:09:47.391 "claimed": false, 00:09:47.391 "zoned": false, 00:09:47.391 "supported_io_types": { 00:09:47.391 "read": true, 00:09:47.392 "write": true, 00:09:47.392 "unmap": true, 00:09:47.392 "flush": true, 00:09:47.392 "reset": true, 00:09:47.392 "nvme_admin": false, 00:09:47.392 "nvme_io": false, 00:09:47.392 "nvme_io_md": false, 00:09:47.392 "write_zeroes": true, 00:09:47.392 "zcopy": false, 00:09:47.392 "get_zone_info": false, 00:09:47.392 "zone_management": false, 00:09:47.392 "zone_append": false, 00:09:47.392 "compare": false, 00:09:47.392 "compare_and_write": false, 00:09:47.392 "abort": false, 00:09:47.392 "seek_hole": false, 00:09:47.392 "seek_data": false, 00:09:47.392 "copy": false, 00:09:47.392 
"nvme_iov_md": false 00:09:47.392 }, 00:09:47.392 "memory_domains": [ 00:09:47.392 { 00:09:47.392 "dma_device_id": "system", 00:09:47.392 "dma_device_type": 1 00:09:47.392 }, 00:09:47.392 { 00:09:47.392 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:47.392 "dma_device_type": 2 00:09:47.392 }, 00:09:47.392 { 00:09:47.392 "dma_device_id": "system", 00:09:47.392 "dma_device_type": 1 00:09:47.392 }, 00:09:47.392 { 00:09:47.392 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:47.392 "dma_device_type": 2 00:09:47.392 }, 00:09:47.392 { 00:09:47.392 "dma_device_id": "system", 00:09:47.392 "dma_device_type": 1 00:09:47.392 }, 00:09:47.392 { 00:09:47.392 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:47.392 "dma_device_type": 2 00:09:47.392 }, 00:09:47.392 { 00:09:47.392 "dma_device_id": "system", 00:09:47.392 "dma_device_type": 1 00:09:47.392 }, 00:09:47.392 { 00:09:47.392 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:47.392 "dma_device_type": 2 00:09:47.392 } 00:09:47.392 ], 00:09:47.392 "driver_specific": { 00:09:47.392 "raid": { 00:09:47.392 "uuid": "b7fdf073-796c-482e-9e83-8473c0815197", 00:09:47.392 "strip_size_kb": 64, 00:09:47.392 "state": "online", 00:09:47.392 "raid_level": "concat", 00:09:47.392 "superblock": true, 00:09:47.392 "num_base_bdevs": 4, 00:09:47.392 "num_base_bdevs_discovered": 4, 00:09:47.392 "num_base_bdevs_operational": 4, 00:09:47.392 "base_bdevs_list": [ 00:09:47.392 { 00:09:47.392 "name": "BaseBdev1", 00:09:47.392 "uuid": "d46199be-26c8-4e36-8f86-604926ad9e9b", 00:09:47.392 "is_configured": true, 00:09:47.392 "data_offset": 2048, 00:09:47.392 "data_size": 63488 00:09:47.392 }, 00:09:47.392 { 00:09:47.392 "name": "BaseBdev2", 00:09:47.392 "uuid": "22c2e313-ad18-461d-80b6-96586ca1883f", 00:09:47.392 "is_configured": true, 00:09:47.392 "data_offset": 2048, 00:09:47.392 "data_size": 63488 00:09:47.392 }, 00:09:47.392 { 00:09:47.392 "name": "BaseBdev3", 00:09:47.392 "uuid": "1792fc6b-4c3e-4f99-834e-2bbf6516dba0", 00:09:47.392 "is_configured": true, 
00:09:47.392 "data_offset": 2048, 00:09:47.392 "data_size": 63488 00:09:47.392 }, 00:09:47.392 { 00:09:47.392 "name": "BaseBdev4", 00:09:47.392 "uuid": "6f69ec77-4950-487f-973e-50b9e2d4fe16", 00:09:47.392 "is_configured": true, 00:09:47.392 "data_offset": 2048, 00:09:47.392 "data_size": 63488 00:09:47.392 } 00:09:47.392 ] 00:09:47.392 } 00:09:47.392 } 00:09:47.392 }' 00:09:47.392 04:58:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:47.392 04:58:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:47.392 BaseBdev2 00:09:47.392 BaseBdev3 00:09:47.392 BaseBdev4' 00:09:47.392 04:58:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:47.392 04:58:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:47.652 04:58:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:47.652 04:58:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:47.652 04:58:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:47.652 04:58:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.652 04:58:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.652 04:58:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.652 04:58:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:47.652 04:58:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:47.652 04:58:58 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:47.652 04:58:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:47.652 04:58:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:47.652 04:58:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.652 04:58:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.652 04:58:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.652 04:58:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:47.652 04:58:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:47.652 04:58:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:47.652 04:58:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:47.652 04:58:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:47.652 04:58:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.652 04:58:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.652 04:58:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.652 04:58:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:47.652 04:58:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:47.652 04:58:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:09:47.652 04:58:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:09:47.652 04:58:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:47.652 04:58:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.652 04:58:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.652 04:58:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.652 04:58:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:47.652 04:58:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:47.652 04:58:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:47.652 04:58:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.652 04:58:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.652 [2024-12-14 04:58:58.467536] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:47.652 [2024-12-14 04:58:58.467563] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:47.652 [2024-12-14 04:58:58.467615] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:47.652 04:58:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.652 04:58:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:47.652 04:58:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:09:47.652 04:58:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:09:47.652 04:58:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:09:47.652 04:58:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:47.652 04:58:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:09:47.652 04:58:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:47.652 04:58:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:47.652 04:58:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:47.652 04:58:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:47.652 04:58:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:47.652 04:58:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:47.653 04:58:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:47.653 04:58:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:47.653 04:58:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:47.653 04:58:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:47.653 04:58:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.653 04:58:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.653 04:58:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:47.653 04:58:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
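The two jq filters exercised in the trace above (bdev_raid.sh@188 and @192) can be reproduced standalone against trimmed sample JSON; the sample bdev data below is illustrative and not copied from this run. Note the trailing spaces in `cmp_base_bdev='512   '`: they come from jq's `join(" ")`, which renders the absent `md_size`, `md_interleave`, and `dif_type` fields (null on a plain malloc bdev) as empty strings, which is exactly what the `[[ 512 == \5\1\2\ \ \ ]]` pattern at @193 matches.

```shell
#!/usr/bin/env sh
# Trimmed, illustrative samples of rpc.py output (not captured from this run).
raid='{"driver_specific": {"raid": {"base_bdevs_list": [
  {"name": "BaseBdev1", "is_configured": true},
  {"name": "BaseBdev2", "is_configured": true},
  {"name": "BaseBdev3", "is_configured": false}
]}}}'
bdevs='[{"name": "BaseBdev1", "block_size": 512, "driver_specific": {}}]'

# bdev_raid.sh@188: keep only the names of configured base bdevs.
printf '%s' "$raid" |
  jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'

# bdev_raid.sh@192: block_size plus the DIF-related fields. The missing
# md_size/md_interleave/dif_type evaluate to null, and join(" ") turns
# null into "", leaving "512" followed by three spaces.
printf '%s' "$bdevs" |
  jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
```

The first filter prints the configured names one per line (here `BaseBdev1` and `BaseBdev2`); the second prints `512` with three trailing spaces, matching the comparison string built by the test.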
00:09:47.653 04:58:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:47.653 "name": "Existed_Raid", 00:09:47.653 "uuid": "b7fdf073-796c-482e-9e83-8473c0815197", 00:09:47.653 "strip_size_kb": 64, 00:09:47.653 "state": "offline", 00:09:47.653 "raid_level": "concat", 00:09:47.653 "superblock": true, 00:09:47.653 "num_base_bdevs": 4, 00:09:47.653 "num_base_bdevs_discovered": 3, 00:09:47.653 "num_base_bdevs_operational": 3, 00:09:47.653 "base_bdevs_list": [ 00:09:47.653 { 00:09:47.653 "name": null, 00:09:47.653 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:47.653 "is_configured": false, 00:09:47.653 "data_offset": 0, 00:09:47.653 "data_size": 63488 00:09:47.653 }, 00:09:47.653 { 00:09:47.653 "name": "BaseBdev2", 00:09:47.653 "uuid": "22c2e313-ad18-461d-80b6-96586ca1883f", 00:09:47.653 "is_configured": true, 00:09:47.653 "data_offset": 2048, 00:09:47.653 "data_size": 63488 00:09:47.653 }, 00:09:47.653 { 00:09:47.653 "name": "BaseBdev3", 00:09:47.653 "uuid": "1792fc6b-4c3e-4f99-834e-2bbf6516dba0", 00:09:47.653 "is_configured": true, 00:09:47.653 "data_offset": 2048, 00:09:47.653 "data_size": 63488 00:09:47.653 }, 00:09:47.653 { 00:09:47.653 "name": "BaseBdev4", 00:09:47.653 "uuid": "6f69ec77-4950-487f-973e-50b9e2d4fe16", 00:09:47.653 "is_configured": true, 00:09:47.653 "data_offset": 2048, 00:09:47.653 "data_size": 63488 00:09:47.653 } 00:09:47.653 ] 00:09:47.653 }' 00:09:47.653 04:58:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:47.913 04:58:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:48.174 04:58:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:48.174 04:58:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:48.174 04:58:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:48.174 
04:58:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.174 04:58:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:48.174 04:58:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:48.174 04:58:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.175 04:58:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:48.175 04:58:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:48.175 04:58:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:48.175 04:58:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.175 04:58:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:48.175 [2024-12-14 04:58:58.945805] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:48.175 04:58:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.175 04:58:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:48.175 04:58:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:48.175 04:58:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:48.175 04:58:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:48.175 04:58:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.175 04:58:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:48.175 04:58:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:09:48.175 04:58:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:48.175 04:58:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:48.175 04:58:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:48.175 04:58:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.175 04:58:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:48.175 [2024-12-14 04:58:59.012974] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:48.175 04:58:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.175 04:58:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:48.175 04:58:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:48.175 04:58:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:48.175 04:58:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:48.175 04:58:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.175 04:58:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:48.175 04:58:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.436 04:58:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:48.436 04:58:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:48.436 04:58:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:09:48.436 04:58:59 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.436 04:58:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:48.436 [2024-12-14 04:58:59.083885] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:09:48.436 [2024-12-14 04:58:59.083992] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:09:48.436 04:58:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.436 04:58:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:48.436 04:58:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:48.436 04:58:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:48.436 04:58:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.436 04:58:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:48.436 04:58:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:48.436 04:58:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.436 04:58:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:48.436 04:58:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:48.436 04:58:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:09:48.436 04:58:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:48.436 04:58:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:48.436 04:58:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:09:48.436 04:58:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.436 04:58:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:48.436 BaseBdev2 00:09:48.436 04:58:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.436 04:58:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:48.436 04:58:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:48.436 04:58:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:48.436 04:58:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:48.436 04:58:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:48.436 04:58:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:48.436 04:58:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:48.436 04:58:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.436 04:58:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:48.436 04:58:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.436 04:58:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:48.436 04:58:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.436 04:58:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:48.436 [ 00:09:48.436 { 00:09:48.436 "name": "BaseBdev2", 00:09:48.436 "aliases": [ 00:09:48.436 
"0dd3a26a-793d-4ec2-b076-d006c6b471bd" 00:09:48.436 ], 00:09:48.437 "product_name": "Malloc disk", 00:09:48.437 "block_size": 512, 00:09:48.437 "num_blocks": 65536, 00:09:48.437 "uuid": "0dd3a26a-793d-4ec2-b076-d006c6b471bd", 00:09:48.437 "assigned_rate_limits": { 00:09:48.437 "rw_ios_per_sec": 0, 00:09:48.437 "rw_mbytes_per_sec": 0, 00:09:48.437 "r_mbytes_per_sec": 0, 00:09:48.437 "w_mbytes_per_sec": 0 00:09:48.437 }, 00:09:48.437 "claimed": false, 00:09:48.437 "zoned": false, 00:09:48.437 "supported_io_types": { 00:09:48.437 "read": true, 00:09:48.437 "write": true, 00:09:48.437 "unmap": true, 00:09:48.437 "flush": true, 00:09:48.437 "reset": true, 00:09:48.437 "nvme_admin": false, 00:09:48.437 "nvme_io": false, 00:09:48.437 "nvme_io_md": false, 00:09:48.437 "write_zeroes": true, 00:09:48.437 "zcopy": true, 00:09:48.437 "get_zone_info": false, 00:09:48.437 "zone_management": false, 00:09:48.437 "zone_append": false, 00:09:48.437 "compare": false, 00:09:48.437 "compare_and_write": false, 00:09:48.437 "abort": true, 00:09:48.437 "seek_hole": false, 00:09:48.437 "seek_data": false, 00:09:48.437 "copy": true, 00:09:48.437 "nvme_iov_md": false 00:09:48.437 }, 00:09:48.437 "memory_domains": [ 00:09:48.437 { 00:09:48.437 "dma_device_id": "system", 00:09:48.437 "dma_device_type": 1 00:09:48.437 }, 00:09:48.437 { 00:09:48.437 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:48.437 "dma_device_type": 2 00:09:48.437 } 00:09:48.437 ], 00:09:48.437 "driver_specific": {} 00:09:48.437 } 00:09:48.437 ] 00:09:48.437 04:58:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.437 04:58:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:48.437 04:58:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:48.437 04:58:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:48.437 04:58:59 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:48.437 04:58:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.437 04:58:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:48.437 BaseBdev3 00:09:48.437 04:58:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.437 04:58:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:48.437 04:58:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:09:48.437 04:58:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:48.437 04:58:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:48.437 04:58:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:48.437 04:58:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:48.437 04:58:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:48.437 04:58:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.437 04:58:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:48.437 04:58:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.437 04:58:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:48.437 04:58:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.437 04:58:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:48.437 [ 00:09:48.437 { 
00:09:48.437 "name": "BaseBdev3", 00:09:48.437 "aliases": [ 00:09:48.437 "0ce16647-1bcb-4aa4-9a69-f9ff8863abac" 00:09:48.437 ], 00:09:48.437 "product_name": "Malloc disk", 00:09:48.437 "block_size": 512, 00:09:48.437 "num_blocks": 65536, 00:09:48.437 "uuid": "0ce16647-1bcb-4aa4-9a69-f9ff8863abac", 00:09:48.437 "assigned_rate_limits": { 00:09:48.437 "rw_ios_per_sec": 0, 00:09:48.437 "rw_mbytes_per_sec": 0, 00:09:48.437 "r_mbytes_per_sec": 0, 00:09:48.437 "w_mbytes_per_sec": 0 00:09:48.437 }, 00:09:48.437 "claimed": false, 00:09:48.437 "zoned": false, 00:09:48.437 "supported_io_types": { 00:09:48.437 "read": true, 00:09:48.437 "write": true, 00:09:48.437 "unmap": true, 00:09:48.437 "flush": true, 00:09:48.437 "reset": true, 00:09:48.437 "nvme_admin": false, 00:09:48.437 "nvme_io": false, 00:09:48.437 "nvme_io_md": false, 00:09:48.437 "write_zeroes": true, 00:09:48.437 "zcopy": true, 00:09:48.437 "get_zone_info": false, 00:09:48.437 "zone_management": false, 00:09:48.437 "zone_append": false, 00:09:48.437 "compare": false, 00:09:48.437 "compare_and_write": false, 00:09:48.437 "abort": true, 00:09:48.437 "seek_hole": false, 00:09:48.437 "seek_data": false, 00:09:48.437 "copy": true, 00:09:48.437 "nvme_iov_md": false 00:09:48.437 }, 00:09:48.437 "memory_domains": [ 00:09:48.437 { 00:09:48.437 "dma_device_id": "system", 00:09:48.437 "dma_device_type": 1 00:09:48.437 }, 00:09:48.437 { 00:09:48.437 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:48.437 "dma_device_type": 2 00:09:48.437 } 00:09:48.437 ], 00:09:48.437 "driver_specific": {} 00:09:48.437 } 00:09:48.437 ] 00:09:48.437 04:58:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.437 04:58:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:48.437 04:58:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:48.437 04:58:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:09:48.437 04:58:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:09:48.437 04:58:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.437 04:58:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:48.437 BaseBdev4 00:09:48.437 04:58:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.437 04:58:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:09:48.437 04:58:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:09:48.437 04:58:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:48.437 04:58:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:48.437 04:58:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:48.437 04:58:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:48.437 04:58:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:48.437 04:58:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.437 04:58:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:48.437 04:58:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.437 04:58:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:09:48.437 04:58:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.437 04:58:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:09:48.437 [ 00:09:48.437 { 00:09:48.437 "name": "BaseBdev4", 00:09:48.437 "aliases": [ 00:09:48.437 "03346b85-5cc0-4a74-ad2c-f37714dd1b5c" 00:09:48.437 ], 00:09:48.437 "product_name": "Malloc disk", 00:09:48.437 "block_size": 512, 00:09:48.437 "num_blocks": 65536, 00:09:48.437 "uuid": "03346b85-5cc0-4a74-ad2c-f37714dd1b5c", 00:09:48.437 "assigned_rate_limits": { 00:09:48.437 "rw_ios_per_sec": 0, 00:09:48.437 "rw_mbytes_per_sec": 0, 00:09:48.437 "r_mbytes_per_sec": 0, 00:09:48.437 "w_mbytes_per_sec": 0 00:09:48.437 }, 00:09:48.437 "claimed": false, 00:09:48.437 "zoned": false, 00:09:48.437 "supported_io_types": { 00:09:48.437 "read": true, 00:09:48.437 "write": true, 00:09:48.437 "unmap": true, 00:09:48.437 "flush": true, 00:09:48.437 "reset": true, 00:09:48.437 "nvme_admin": false, 00:09:48.437 "nvme_io": false, 00:09:48.437 "nvme_io_md": false, 00:09:48.437 "write_zeroes": true, 00:09:48.437 "zcopy": true, 00:09:48.437 "get_zone_info": false, 00:09:48.437 "zone_management": false, 00:09:48.437 "zone_append": false, 00:09:48.437 "compare": false, 00:09:48.437 "compare_and_write": false, 00:09:48.437 "abort": true, 00:09:48.437 "seek_hole": false, 00:09:48.437 "seek_data": false, 00:09:48.437 "copy": true, 00:09:48.437 "nvme_iov_md": false 00:09:48.437 }, 00:09:48.437 "memory_domains": [ 00:09:48.437 { 00:09:48.437 "dma_device_id": "system", 00:09:48.437 "dma_device_type": 1 00:09:48.437 }, 00:09:48.437 { 00:09:48.437 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:48.437 "dma_device_type": 2 00:09:48.437 } 00:09:48.437 ], 00:09:48.437 "driver_specific": {} 00:09:48.437 } 00:09:48.437 ] 00:09:48.437 04:58:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.437 04:58:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:48.437 04:58:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:48.437 04:58:59 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:48.437 04:58:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:48.437 04:58:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.437 04:58:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:48.437 [2024-12-14 04:58:59.299056] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:48.437 [2024-12-14 04:58:59.299141] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:48.437 [2024-12-14 04:58:59.299220] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:48.437 [2024-12-14 04:58:59.300999] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:48.437 [2024-12-14 04:58:59.301102] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:09:48.438 04:58:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.438 04:58:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:48.438 04:58:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:48.438 04:58:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:48.438 04:58:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:48.438 04:58:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:48.438 04:58:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:09:48.438 04:58:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:48.438 04:58:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:48.438 04:58:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:48.438 04:58:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:48.438 04:58:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:48.438 04:58:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.438 04:58:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:48.438 04:58:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:48.697 04:58:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.697 04:58:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:48.697 "name": "Existed_Raid", 00:09:48.697 "uuid": "abaafb43-b384-4a01-8af4-8dba2c4de9ce", 00:09:48.697 "strip_size_kb": 64, 00:09:48.697 "state": "configuring", 00:09:48.697 "raid_level": "concat", 00:09:48.697 "superblock": true, 00:09:48.697 "num_base_bdevs": 4, 00:09:48.697 "num_base_bdevs_discovered": 3, 00:09:48.697 "num_base_bdevs_operational": 4, 00:09:48.697 "base_bdevs_list": [ 00:09:48.697 { 00:09:48.697 "name": "BaseBdev1", 00:09:48.697 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:48.697 "is_configured": false, 00:09:48.697 "data_offset": 0, 00:09:48.697 "data_size": 0 00:09:48.697 }, 00:09:48.697 { 00:09:48.697 "name": "BaseBdev2", 00:09:48.697 "uuid": "0dd3a26a-793d-4ec2-b076-d006c6b471bd", 00:09:48.697 "is_configured": true, 00:09:48.697 "data_offset": 2048, 00:09:48.697 "data_size": 63488 
00:09:48.697 }, 00:09:48.697 { 00:09:48.697 "name": "BaseBdev3", 00:09:48.697 "uuid": "0ce16647-1bcb-4aa4-9a69-f9ff8863abac", 00:09:48.697 "is_configured": true, 00:09:48.697 "data_offset": 2048, 00:09:48.697 "data_size": 63488 00:09:48.697 }, 00:09:48.697 { 00:09:48.697 "name": "BaseBdev4", 00:09:48.697 "uuid": "03346b85-5cc0-4a74-ad2c-f37714dd1b5c", 00:09:48.697 "is_configured": true, 00:09:48.697 "data_offset": 2048, 00:09:48.697 "data_size": 63488 00:09:48.697 } 00:09:48.697 ] 00:09:48.697 }' 00:09:48.697 04:58:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:48.697 04:58:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:48.957 04:58:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:48.957 04:58:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.957 04:58:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:48.957 [2024-12-14 04:58:59.762258] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:48.957 04:58:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.957 04:58:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:48.957 04:58:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:48.957 04:58:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:48.957 04:58:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:48.957 04:58:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:48.957 04:58:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:09:48.957 04:58:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:48.957 04:58:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:48.957 04:58:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:48.957 04:58:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:48.957 04:58:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:48.957 04:58:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:48.957 04:58:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.957 04:58:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:48.957 04:58:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.957 04:58:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:48.957 "name": "Existed_Raid", 00:09:48.957 "uuid": "abaafb43-b384-4a01-8af4-8dba2c4de9ce", 00:09:48.957 "strip_size_kb": 64, 00:09:48.957 "state": "configuring", 00:09:48.957 "raid_level": "concat", 00:09:48.957 "superblock": true, 00:09:48.957 "num_base_bdevs": 4, 00:09:48.957 "num_base_bdevs_discovered": 2, 00:09:48.957 "num_base_bdevs_operational": 4, 00:09:48.957 "base_bdevs_list": [ 00:09:48.957 { 00:09:48.957 "name": "BaseBdev1", 00:09:48.957 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:48.957 "is_configured": false, 00:09:48.957 "data_offset": 0, 00:09:48.957 "data_size": 0 00:09:48.957 }, 00:09:48.957 { 00:09:48.957 "name": null, 00:09:48.957 "uuid": "0dd3a26a-793d-4ec2-b076-d006c6b471bd", 00:09:48.957 "is_configured": false, 00:09:48.957 "data_offset": 0, 00:09:48.957 "data_size": 63488 
00:09:48.957 }, 00:09:48.957 { 00:09:48.957 "name": "BaseBdev3", 00:09:48.957 "uuid": "0ce16647-1bcb-4aa4-9a69-f9ff8863abac", 00:09:48.957 "is_configured": true, 00:09:48.957 "data_offset": 2048, 00:09:48.957 "data_size": 63488 00:09:48.957 }, 00:09:48.957 { 00:09:48.957 "name": "BaseBdev4", 00:09:48.957 "uuid": "03346b85-5cc0-4a74-ad2c-f37714dd1b5c", 00:09:48.957 "is_configured": true, 00:09:48.957 "data_offset": 2048, 00:09:48.957 "data_size": 63488 00:09:48.957 } 00:09:48.957 ] 00:09:48.957 }' 00:09:48.957 04:58:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:48.957 04:58:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:49.526 04:59:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:49.526 04:59:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:49.526 04:59:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.526 04:59:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:49.526 04:59:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.526 04:59:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:49.526 04:59:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:49.526 04:59:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.526 04:59:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:49.526 [2024-12-14 04:59:00.296333] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:49.526 BaseBdev1 00:09:49.526 04:59:00 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.526 04:59:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:49.526 04:59:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:49.526 04:59:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:49.526 04:59:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:49.527 04:59:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:49.527 04:59:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:49.527 04:59:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:49.527 04:59:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.527 04:59:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:49.527 04:59:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.527 04:59:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:49.527 04:59:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.527 04:59:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:49.527 [ 00:09:49.527 { 00:09:49.527 "name": "BaseBdev1", 00:09:49.527 "aliases": [ 00:09:49.527 "ffb21b94-d5fb-42f9-ad0c-a99fda873343" 00:09:49.527 ], 00:09:49.527 "product_name": "Malloc disk", 00:09:49.527 "block_size": 512, 00:09:49.527 "num_blocks": 65536, 00:09:49.527 "uuid": "ffb21b94-d5fb-42f9-ad0c-a99fda873343", 00:09:49.527 "assigned_rate_limits": { 00:09:49.527 "rw_ios_per_sec": 0, 00:09:49.527 "rw_mbytes_per_sec": 0, 
00:09:49.527 "r_mbytes_per_sec": 0, 00:09:49.527 "w_mbytes_per_sec": 0 00:09:49.527 }, 00:09:49.527 "claimed": true, 00:09:49.527 "claim_type": "exclusive_write", 00:09:49.527 "zoned": false, 00:09:49.527 "supported_io_types": { 00:09:49.527 "read": true, 00:09:49.527 "write": true, 00:09:49.527 "unmap": true, 00:09:49.527 "flush": true, 00:09:49.527 "reset": true, 00:09:49.527 "nvme_admin": false, 00:09:49.527 "nvme_io": false, 00:09:49.527 "nvme_io_md": false, 00:09:49.527 "write_zeroes": true, 00:09:49.527 "zcopy": true, 00:09:49.527 "get_zone_info": false, 00:09:49.527 "zone_management": false, 00:09:49.527 "zone_append": false, 00:09:49.527 "compare": false, 00:09:49.527 "compare_and_write": false, 00:09:49.527 "abort": true, 00:09:49.527 "seek_hole": false, 00:09:49.527 "seek_data": false, 00:09:49.527 "copy": true, 00:09:49.527 "nvme_iov_md": false 00:09:49.527 }, 00:09:49.527 "memory_domains": [ 00:09:49.527 { 00:09:49.527 "dma_device_id": "system", 00:09:49.527 "dma_device_type": 1 00:09:49.527 }, 00:09:49.527 { 00:09:49.527 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:49.527 "dma_device_type": 2 00:09:49.527 } 00:09:49.527 ], 00:09:49.527 "driver_specific": {} 00:09:49.527 } 00:09:49.527 ] 00:09:49.527 04:59:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.527 04:59:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:49.527 04:59:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:49.527 04:59:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:49.527 04:59:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:49.527 04:59:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:49.527 04:59:00 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:49.527 04:59:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:49.527 04:59:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:49.527 04:59:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:49.527 04:59:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:49.527 04:59:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:49.527 04:59:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:49.527 04:59:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:49.527 04:59:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.527 04:59:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:49.527 04:59:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.527 04:59:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:49.527 "name": "Existed_Raid", 00:09:49.527 "uuid": "abaafb43-b384-4a01-8af4-8dba2c4de9ce", 00:09:49.527 "strip_size_kb": 64, 00:09:49.527 "state": "configuring", 00:09:49.527 "raid_level": "concat", 00:09:49.527 "superblock": true, 00:09:49.527 "num_base_bdevs": 4, 00:09:49.527 "num_base_bdevs_discovered": 3, 00:09:49.527 "num_base_bdevs_operational": 4, 00:09:49.527 "base_bdevs_list": [ 00:09:49.527 { 00:09:49.527 "name": "BaseBdev1", 00:09:49.527 "uuid": "ffb21b94-d5fb-42f9-ad0c-a99fda873343", 00:09:49.527 "is_configured": true, 00:09:49.527 "data_offset": 2048, 00:09:49.527 "data_size": 63488 00:09:49.527 }, 00:09:49.527 { 
00:09:49.527 "name": null, 00:09:49.527 "uuid": "0dd3a26a-793d-4ec2-b076-d006c6b471bd", 00:09:49.527 "is_configured": false, 00:09:49.527 "data_offset": 0, 00:09:49.527 "data_size": 63488 00:09:49.527 }, 00:09:49.527 { 00:09:49.527 "name": "BaseBdev3", 00:09:49.527 "uuid": "0ce16647-1bcb-4aa4-9a69-f9ff8863abac", 00:09:49.527 "is_configured": true, 00:09:49.527 "data_offset": 2048, 00:09:49.527 "data_size": 63488 00:09:49.527 }, 00:09:49.527 { 00:09:49.527 "name": "BaseBdev4", 00:09:49.527 "uuid": "03346b85-5cc0-4a74-ad2c-f37714dd1b5c", 00:09:49.527 "is_configured": true, 00:09:49.527 "data_offset": 2048, 00:09:49.527 "data_size": 63488 00:09:49.527 } 00:09:49.527 ] 00:09:49.527 }' 00:09:49.527 04:59:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:49.527 04:59:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:50.096 04:59:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:50.096 04:59:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:50.096 04:59:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.096 04:59:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:50.096 04:59:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.096 04:59:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:50.096 04:59:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:50.096 04:59:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.096 04:59:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:50.096 [2024-12-14 04:59:00.835423] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:50.096 04:59:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.096 04:59:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:50.096 04:59:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:50.096 04:59:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:50.096 04:59:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:50.096 04:59:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:50.096 04:59:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:50.096 04:59:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:50.096 04:59:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:50.096 04:59:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:50.096 04:59:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:50.096 04:59:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:50.096 04:59:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.096 04:59:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:50.096 04:59:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:50.096 04:59:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.096 04:59:00 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:50.096 "name": "Existed_Raid", 00:09:50.096 "uuid": "abaafb43-b384-4a01-8af4-8dba2c4de9ce", 00:09:50.097 "strip_size_kb": 64, 00:09:50.097 "state": "configuring", 00:09:50.097 "raid_level": "concat", 00:09:50.097 "superblock": true, 00:09:50.097 "num_base_bdevs": 4, 00:09:50.097 "num_base_bdevs_discovered": 2, 00:09:50.097 "num_base_bdevs_operational": 4, 00:09:50.097 "base_bdevs_list": [ 00:09:50.097 { 00:09:50.097 "name": "BaseBdev1", 00:09:50.097 "uuid": "ffb21b94-d5fb-42f9-ad0c-a99fda873343", 00:09:50.097 "is_configured": true, 00:09:50.097 "data_offset": 2048, 00:09:50.097 "data_size": 63488 00:09:50.097 }, 00:09:50.097 { 00:09:50.097 "name": null, 00:09:50.097 "uuid": "0dd3a26a-793d-4ec2-b076-d006c6b471bd", 00:09:50.097 "is_configured": false, 00:09:50.097 "data_offset": 0, 00:09:50.097 "data_size": 63488 00:09:50.097 }, 00:09:50.097 { 00:09:50.097 "name": null, 00:09:50.097 "uuid": "0ce16647-1bcb-4aa4-9a69-f9ff8863abac", 00:09:50.097 "is_configured": false, 00:09:50.097 "data_offset": 0, 00:09:50.097 "data_size": 63488 00:09:50.097 }, 00:09:50.097 { 00:09:50.097 "name": "BaseBdev4", 00:09:50.097 "uuid": "03346b85-5cc0-4a74-ad2c-f37714dd1b5c", 00:09:50.097 "is_configured": true, 00:09:50.097 "data_offset": 2048, 00:09:50.097 "data_size": 63488 00:09:50.097 } 00:09:50.097 ] 00:09:50.097 }' 00:09:50.097 04:59:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:50.097 04:59:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:50.666 04:59:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:50.666 04:59:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:50.666 04:59:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.666 
04:59:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:50.666 04:59:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.666 04:59:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:50.666 04:59:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:50.666 04:59:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.666 04:59:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:50.666 [2024-12-14 04:59:01.342640] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:50.666 04:59:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.666 04:59:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:50.666 04:59:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:50.666 04:59:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:50.666 04:59:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:50.666 04:59:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:50.666 04:59:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:50.666 04:59:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:50.666 04:59:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:50.666 04:59:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:09:50.666 04:59:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:50.666 04:59:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:50.666 04:59:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:50.666 04:59:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.666 04:59:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:50.666 04:59:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.666 04:59:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:50.666 "name": "Existed_Raid", 00:09:50.666 "uuid": "abaafb43-b384-4a01-8af4-8dba2c4de9ce", 00:09:50.666 "strip_size_kb": 64, 00:09:50.666 "state": "configuring", 00:09:50.666 "raid_level": "concat", 00:09:50.666 "superblock": true, 00:09:50.666 "num_base_bdevs": 4, 00:09:50.666 "num_base_bdevs_discovered": 3, 00:09:50.666 "num_base_bdevs_operational": 4, 00:09:50.666 "base_bdevs_list": [ 00:09:50.666 { 00:09:50.666 "name": "BaseBdev1", 00:09:50.666 "uuid": "ffb21b94-d5fb-42f9-ad0c-a99fda873343", 00:09:50.666 "is_configured": true, 00:09:50.666 "data_offset": 2048, 00:09:50.666 "data_size": 63488 00:09:50.666 }, 00:09:50.666 { 00:09:50.666 "name": null, 00:09:50.666 "uuid": "0dd3a26a-793d-4ec2-b076-d006c6b471bd", 00:09:50.666 "is_configured": false, 00:09:50.666 "data_offset": 0, 00:09:50.666 "data_size": 63488 00:09:50.666 }, 00:09:50.666 { 00:09:50.666 "name": "BaseBdev3", 00:09:50.666 "uuid": "0ce16647-1bcb-4aa4-9a69-f9ff8863abac", 00:09:50.666 "is_configured": true, 00:09:50.666 "data_offset": 2048, 00:09:50.666 "data_size": 63488 00:09:50.666 }, 00:09:50.666 { 00:09:50.666 "name": "BaseBdev4", 00:09:50.666 "uuid": 
"03346b85-5cc0-4a74-ad2c-f37714dd1b5c", 00:09:50.666 "is_configured": true, 00:09:50.666 "data_offset": 2048, 00:09:50.666 "data_size": 63488 00:09:50.666 } 00:09:50.666 ] 00:09:50.666 }' 00:09:50.666 04:59:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:50.666 04:59:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:51.251 04:59:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:51.251 04:59:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:51.251 04:59:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.251 04:59:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:51.251 04:59:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.251 04:59:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:51.251 04:59:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:51.251 04:59:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.251 04:59:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:51.251 [2024-12-14 04:59:01.869755] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:51.251 04:59:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.251 04:59:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:51.251 04:59:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:51.251 04:59:01 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:51.251 04:59:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:51.251 04:59:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:51.251 04:59:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:51.251 04:59:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:51.251 04:59:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:51.251 04:59:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:51.251 04:59:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:51.251 04:59:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:51.251 04:59:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:51.251 04:59:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.251 04:59:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:51.251 04:59:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.251 04:59:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:51.251 "name": "Existed_Raid", 00:09:51.251 "uuid": "abaafb43-b384-4a01-8af4-8dba2c4de9ce", 00:09:51.251 "strip_size_kb": 64, 00:09:51.251 "state": "configuring", 00:09:51.251 "raid_level": "concat", 00:09:51.251 "superblock": true, 00:09:51.251 "num_base_bdevs": 4, 00:09:51.251 "num_base_bdevs_discovered": 2, 00:09:51.251 "num_base_bdevs_operational": 4, 00:09:51.251 "base_bdevs_list": [ 00:09:51.251 { 00:09:51.251 "name": null, 00:09:51.251 
"uuid": "ffb21b94-d5fb-42f9-ad0c-a99fda873343", 00:09:51.251 "is_configured": false, 00:09:51.251 "data_offset": 0, 00:09:51.251 "data_size": 63488 00:09:51.251 }, 00:09:51.251 { 00:09:51.251 "name": null, 00:09:51.251 "uuid": "0dd3a26a-793d-4ec2-b076-d006c6b471bd", 00:09:51.251 "is_configured": false, 00:09:51.251 "data_offset": 0, 00:09:51.251 "data_size": 63488 00:09:51.251 }, 00:09:51.251 { 00:09:51.251 "name": "BaseBdev3", 00:09:51.251 "uuid": "0ce16647-1bcb-4aa4-9a69-f9ff8863abac", 00:09:51.251 "is_configured": true, 00:09:51.251 "data_offset": 2048, 00:09:51.251 "data_size": 63488 00:09:51.251 }, 00:09:51.251 { 00:09:51.251 "name": "BaseBdev4", 00:09:51.251 "uuid": "03346b85-5cc0-4a74-ad2c-f37714dd1b5c", 00:09:51.251 "is_configured": true, 00:09:51.251 "data_offset": 2048, 00:09:51.251 "data_size": 63488 00:09:51.251 } 00:09:51.251 ] 00:09:51.251 }' 00:09:51.251 04:59:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:51.251 04:59:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:51.511 04:59:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:51.511 04:59:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:51.511 04:59:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.511 04:59:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:51.511 04:59:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.511 04:59:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:51.511 04:59:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:51.511 04:59:02 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.511 04:59:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:51.511 [2024-12-14 04:59:02.363292] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:51.511 04:59:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.511 04:59:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:51.511 04:59:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:51.511 04:59:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:51.511 04:59:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:51.511 04:59:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:51.511 04:59:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:51.511 04:59:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:51.511 04:59:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:51.511 04:59:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:51.511 04:59:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:51.511 04:59:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:51.511 04:59:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:51.511 04:59:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.511 04:59:02 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:51.770 04:59:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.770 04:59:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:51.770 "name": "Existed_Raid", 00:09:51.770 "uuid": "abaafb43-b384-4a01-8af4-8dba2c4de9ce", 00:09:51.770 "strip_size_kb": 64, 00:09:51.770 "state": "configuring", 00:09:51.770 "raid_level": "concat", 00:09:51.770 "superblock": true, 00:09:51.770 "num_base_bdevs": 4, 00:09:51.770 "num_base_bdevs_discovered": 3, 00:09:51.770 "num_base_bdevs_operational": 4, 00:09:51.770 "base_bdevs_list": [ 00:09:51.770 { 00:09:51.770 "name": null, 00:09:51.771 "uuid": "ffb21b94-d5fb-42f9-ad0c-a99fda873343", 00:09:51.771 "is_configured": false, 00:09:51.771 "data_offset": 0, 00:09:51.771 "data_size": 63488 00:09:51.771 }, 00:09:51.771 { 00:09:51.771 "name": "BaseBdev2", 00:09:51.771 "uuid": "0dd3a26a-793d-4ec2-b076-d006c6b471bd", 00:09:51.771 "is_configured": true, 00:09:51.771 "data_offset": 2048, 00:09:51.771 "data_size": 63488 00:09:51.771 }, 00:09:51.771 { 00:09:51.771 "name": "BaseBdev3", 00:09:51.771 "uuid": "0ce16647-1bcb-4aa4-9a69-f9ff8863abac", 00:09:51.771 "is_configured": true, 00:09:51.771 "data_offset": 2048, 00:09:51.771 "data_size": 63488 00:09:51.771 }, 00:09:51.771 { 00:09:51.771 "name": "BaseBdev4", 00:09:51.771 "uuid": "03346b85-5cc0-4a74-ad2c-f37714dd1b5c", 00:09:51.771 "is_configured": true, 00:09:51.771 "data_offset": 2048, 00:09:51.771 "data_size": 63488 00:09:51.771 } 00:09:51.771 ] 00:09:51.771 }' 00:09:51.771 04:59:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:51.771 04:59:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:52.031 04:59:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:52.031 04:59:02 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.031 04:59:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:52.031 04:59:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:52.031 04:59:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.031 04:59:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:52.031 04:59:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:52.031 04:59:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:52.031 04:59:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.031 04:59:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:52.031 04:59:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.031 04:59:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u ffb21b94-d5fb-42f9-ad0c-a99fda873343 00:09:52.031 04:59:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.031 04:59:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:52.031 [2024-12-14 04:59:02.865325] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:52.031 [2024-12-14 04:59:02.865520] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:09:52.031 [2024-12-14 04:59:02.865534] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:09:52.031 NewBaseBdev 00:09:52.031 [2024-12-14 04:59:02.865786] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:52.031 [2024-12-14 04:59:02.865914] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:09:52.031 [2024-12-14 04:59:02.865930] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:09:52.031 [2024-12-14 04:59:02.866031] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:52.031 04:59:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.031 04:59:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:52.031 04:59:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:09:52.031 04:59:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:52.031 04:59:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:52.031 04:59:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:52.031 04:59:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:52.031 04:59:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:52.031 04:59:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.031 04:59:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:52.031 04:59:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.031 04:59:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:52.031 04:59:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.031 
04:59:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:52.031 [ 00:09:52.031 { 00:09:52.031 "name": "NewBaseBdev", 00:09:52.031 "aliases": [ 00:09:52.031 "ffb21b94-d5fb-42f9-ad0c-a99fda873343" 00:09:52.031 ], 00:09:52.031 "product_name": "Malloc disk", 00:09:52.031 "block_size": 512, 00:09:52.031 "num_blocks": 65536, 00:09:52.031 "uuid": "ffb21b94-d5fb-42f9-ad0c-a99fda873343", 00:09:52.031 "assigned_rate_limits": { 00:09:52.031 "rw_ios_per_sec": 0, 00:09:52.031 "rw_mbytes_per_sec": 0, 00:09:52.031 "r_mbytes_per_sec": 0, 00:09:52.031 "w_mbytes_per_sec": 0 00:09:52.031 }, 00:09:52.031 "claimed": true, 00:09:52.031 "claim_type": "exclusive_write", 00:09:52.031 "zoned": false, 00:09:52.031 "supported_io_types": { 00:09:52.031 "read": true, 00:09:52.031 "write": true, 00:09:52.031 "unmap": true, 00:09:52.031 "flush": true, 00:09:52.031 "reset": true, 00:09:52.031 "nvme_admin": false, 00:09:52.031 "nvme_io": false, 00:09:52.031 "nvme_io_md": false, 00:09:52.031 "write_zeroes": true, 00:09:52.031 "zcopy": true, 00:09:52.031 "get_zone_info": false, 00:09:52.031 "zone_management": false, 00:09:52.031 "zone_append": false, 00:09:52.031 "compare": false, 00:09:52.031 "compare_and_write": false, 00:09:52.031 "abort": true, 00:09:52.031 "seek_hole": false, 00:09:52.031 "seek_data": false, 00:09:52.031 "copy": true, 00:09:52.031 "nvme_iov_md": false 00:09:52.031 }, 00:09:52.031 "memory_domains": [ 00:09:52.031 { 00:09:52.031 "dma_device_id": "system", 00:09:52.031 "dma_device_type": 1 00:09:52.031 }, 00:09:52.031 { 00:09:52.031 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:52.031 "dma_device_type": 2 00:09:52.031 } 00:09:52.031 ], 00:09:52.031 "driver_specific": {} 00:09:52.031 } 00:09:52.031 ] 00:09:52.031 04:59:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.031 04:59:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:52.031 04:59:02 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:09:52.031 04:59:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:52.031 04:59:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:52.031 04:59:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:52.031 04:59:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:52.031 04:59:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:52.031 04:59:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:52.031 04:59:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:52.031 04:59:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:52.032 04:59:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:52.032 04:59:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:52.032 04:59:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:52.032 04:59:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.032 04:59:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:52.291 04:59:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.291 04:59:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:52.291 "name": "Existed_Raid", 00:09:52.291 "uuid": "abaafb43-b384-4a01-8af4-8dba2c4de9ce", 00:09:52.291 "strip_size_kb": 64, 00:09:52.291 
"state": "online", 00:09:52.291 "raid_level": "concat", 00:09:52.291 "superblock": true, 00:09:52.291 "num_base_bdevs": 4, 00:09:52.291 "num_base_bdevs_discovered": 4, 00:09:52.291 "num_base_bdevs_operational": 4, 00:09:52.291 "base_bdevs_list": [ 00:09:52.291 { 00:09:52.291 "name": "NewBaseBdev", 00:09:52.291 "uuid": "ffb21b94-d5fb-42f9-ad0c-a99fda873343", 00:09:52.291 "is_configured": true, 00:09:52.291 "data_offset": 2048, 00:09:52.291 "data_size": 63488 00:09:52.291 }, 00:09:52.291 { 00:09:52.291 "name": "BaseBdev2", 00:09:52.291 "uuid": "0dd3a26a-793d-4ec2-b076-d006c6b471bd", 00:09:52.291 "is_configured": true, 00:09:52.291 "data_offset": 2048, 00:09:52.291 "data_size": 63488 00:09:52.291 }, 00:09:52.291 { 00:09:52.291 "name": "BaseBdev3", 00:09:52.291 "uuid": "0ce16647-1bcb-4aa4-9a69-f9ff8863abac", 00:09:52.291 "is_configured": true, 00:09:52.291 "data_offset": 2048, 00:09:52.291 "data_size": 63488 00:09:52.291 }, 00:09:52.291 { 00:09:52.291 "name": "BaseBdev4", 00:09:52.291 "uuid": "03346b85-5cc0-4a74-ad2c-f37714dd1b5c", 00:09:52.291 "is_configured": true, 00:09:52.291 "data_offset": 2048, 00:09:52.291 "data_size": 63488 00:09:52.291 } 00:09:52.291 ] 00:09:52.291 }' 00:09:52.291 04:59:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:52.291 04:59:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:52.551 04:59:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:52.551 04:59:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:52.551 04:59:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:52.551 04:59:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:52.551 04:59:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:52.551 
04:59:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:52.551 04:59:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:52.551 04:59:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:52.551 04:59:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.551 04:59:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:52.551 [2024-12-14 04:59:03.380768] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:52.551 04:59:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.551 04:59:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:52.551 "name": "Existed_Raid", 00:09:52.551 "aliases": [ 00:09:52.551 "abaafb43-b384-4a01-8af4-8dba2c4de9ce" 00:09:52.551 ], 00:09:52.551 "product_name": "Raid Volume", 00:09:52.551 "block_size": 512, 00:09:52.551 "num_blocks": 253952, 00:09:52.551 "uuid": "abaafb43-b384-4a01-8af4-8dba2c4de9ce", 00:09:52.551 "assigned_rate_limits": { 00:09:52.551 "rw_ios_per_sec": 0, 00:09:52.551 "rw_mbytes_per_sec": 0, 00:09:52.551 "r_mbytes_per_sec": 0, 00:09:52.551 "w_mbytes_per_sec": 0 00:09:52.551 }, 00:09:52.551 "claimed": false, 00:09:52.551 "zoned": false, 00:09:52.551 "supported_io_types": { 00:09:52.551 "read": true, 00:09:52.551 "write": true, 00:09:52.551 "unmap": true, 00:09:52.551 "flush": true, 00:09:52.551 "reset": true, 00:09:52.551 "nvme_admin": false, 00:09:52.551 "nvme_io": false, 00:09:52.551 "nvme_io_md": false, 00:09:52.551 "write_zeroes": true, 00:09:52.551 "zcopy": false, 00:09:52.551 "get_zone_info": false, 00:09:52.551 "zone_management": false, 00:09:52.551 "zone_append": false, 00:09:52.551 "compare": false, 00:09:52.551 "compare_and_write": false, 00:09:52.551 "abort": 
false, 00:09:52.551 "seek_hole": false, 00:09:52.551 "seek_data": false, 00:09:52.551 "copy": false, 00:09:52.551 "nvme_iov_md": false 00:09:52.551 }, 00:09:52.551 "memory_domains": [ 00:09:52.551 { 00:09:52.551 "dma_device_id": "system", 00:09:52.551 "dma_device_type": 1 00:09:52.551 }, 00:09:52.551 { 00:09:52.551 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:52.551 "dma_device_type": 2 00:09:52.551 }, 00:09:52.551 { 00:09:52.551 "dma_device_id": "system", 00:09:52.551 "dma_device_type": 1 00:09:52.551 }, 00:09:52.551 { 00:09:52.551 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:52.551 "dma_device_type": 2 00:09:52.551 }, 00:09:52.551 { 00:09:52.551 "dma_device_id": "system", 00:09:52.551 "dma_device_type": 1 00:09:52.551 }, 00:09:52.551 { 00:09:52.551 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:52.551 "dma_device_type": 2 00:09:52.551 }, 00:09:52.551 { 00:09:52.551 "dma_device_id": "system", 00:09:52.551 "dma_device_type": 1 00:09:52.551 }, 00:09:52.551 { 00:09:52.551 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:52.551 "dma_device_type": 2 00:09:52.551 } 00:09:52.551 ], 00:09:52.551 "driver_specific": { 00:09:52.551 "raid": { 00:09:52.551 "uuid": "abaafb43-b384-4a01-8af4-8dba2c4de9ce", 00:09:52.551 "strip_size_kb": 64, 00:09:52.551 "state": "online", 00:09:52.551 "raid_level": "concat", 00:09:52.551 "superblock": true, 00:09:52.551 "num_base_bdevs": 4, 00:09:52.551 "num_base_bdevs_discovered": 4, 00:09:52.551 "num_base_bdevs_operational": 4, 00:09:52.551 "base_bdevs_list": [ 00:09:52.551 { 00:09:52.551 "name": "NewBaseBdev", 00:09:52.551 "uuid": "ffb21b94-d5fb-42f9-ad0c-a99fda873343", 00:09:52.551 "is_configured": true, 00:09:52.551 "data_offset": 2048, 00:09:52.551 "data_size": 63488 00:09:52.551 }, 00:09:52.551 { 00:09:52.551 "name": "BaseBdev2", 00:09:52.551 "uuid": "0dd3a26a-793d-4ec2-b076-d006c6b471bd", 00:09:52.551 "is_configured": true, 00:09:52.551 "data_offset": 2048, 00:09:52.551 "data_size": 63488 00:09:52.551 }, 00:09:52.551 { 00:09:52.551 
"name": "BaseBdev3", 00:09:52.551 "uuid": "0ce16647-1bcb-4aa4-9a69-f9ff8863abac", 00:09:52.551 "is_configured": true, 00:09:52.551 "data_offset": 2048, 00:09:52.551 "data_size": 63488 00:09:52.551 }, 00:09:52.551 { 00:09:52.551 "name": "BaseBdev4", 00:09:52.551 "uuid": "03346b85-5cc0-4a74-ad2c-f37714dd1b5c", 00:09:52.551 "is_configured": true, 00:09:52.551 "data_offset": 2048, 00:09:52.551 "data_size": 63488 00:09:52.551 } 00:09:52.551 ] 00:09:52.551 } 00:09:52.551 } 00:09:52.551 }' 00:09:52.551 04:59:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:52.812 04:59:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:52.812 BaseBdev2 00:09:52.812 BaseBdev3 00:09:52.812 BaseBdev4' 00:09:52.812 04:59:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:52.812 04:59:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:52.812 04:59:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:52.812 04:59:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:52.812 04:59:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:52.812 04:59:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.812 04:59:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:52.812 04:59:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.812 04:59:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:52.812 04:59:03 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:52.812 04:59:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:52.812 04:59:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:52.812 04:59:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:52.812 04:59:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.812 04:59:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:52.812 04:59:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.812 04:59:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:52.812 04:59:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:52.812 04:59:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:52.812 04:59:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:52.812 04:59:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:52.812 04:59:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.812 04:59:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:52.812 04:59:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.812 04:59:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:52.812 04:59:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:09:52.812 04:59:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:52.812 04:59:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:52.812 04:59:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:09:52.812 04:59:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.812 04:59:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:52.812 04:59:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.812 04:59:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:52.812 04:59:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:52.812 04:59:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:52.812 04:59:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.812 04:59:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:52.812 [2024-12-14 04:59:03.667971] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:52.812 [2024-12-14 04:59:03.667998] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:52.812 [2024-12-14 04:59:03.668071] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:52.812 [2024-12-14 04:59:03.668133] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:52.812 [2024-12-14 04:59:03.668143] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, 
state offline 00:09:52.812 04:59:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.812 04:59:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 82827 00:09:52.812 04:59:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 82827 ']' 00:09:52.812 04:59:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 82827 00:09:52.812 04:59:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:09:52.812 04:59:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:52.812 04:59:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82827 00:09:53.072 04:59:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:53.072 04:59:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:53.072 04:59:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82827' 00:09:53.072 killing process with pid 82827 00:09:53.072 04:59:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 82827 00:09:53.072 [2024-12-14 04:59:03.714974] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:53.072 04:59:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 82827 00:09:53.072 [2024-12-14 04:59:03.754848] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:53.332 04:59:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:09:53.332 00:09:53.332 real 0m9.570s 00:09:53.332 user 0m16.384s 00:09:53.332 sys 0m1.954s 00:09:53.332 04:59:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:53.332 
************************************ 00:09:53.332 END TEST raid_state_function_test_sb 00:09:53.332 ************************************ 00:09:53.332 04:59:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:53.332 04:59:04 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:09:53.332 04:59:04 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:09:53.332 04:59:04 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:53.332 04:59:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:53.332 ************************************ 00:09:53.332 START TEST raid_superblock_test 00:09:53.332 ************************************ 00:09:53.332 04:59:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test concat 4 00:09:53.332 04:59:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:09:53.332 04:59:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:09:53.332 04:59:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:09:53.332 04:59:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:09:53.332 04:59:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:09:53.332 04:59:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:09:53.332 04:59:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:09:53.332 04:59:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:09:53.332 04:59:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:09:53.332 04:59:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:09:53.332 04:59:04 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:09:53.332 04:59:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:09:53.332 04:59:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:09:53.332 04:59:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:09:53.333 04:59:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:09:53.333 04:59:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:09:53.333 04:59:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=83475 00:09:53.333 04:59:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:09:53.333 04:59:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 83475 00:09:53.333 04:59:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 83475 ']' 00:09:53.333 04:59:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:53.333 04:59:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:53.333 04:59:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:53.333 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:53.333 04:59:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:53.333 04:59:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.333 [2024-12-14 04:59:04.144352] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:09:53.333 [2024-12-14 04:59:04.144607] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83475 ] 00:09:53.593 [2024-12-14 04:59:04.303361] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:53.593 [2024-12-14 04:59:04.348780] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:53.593 [2024-12-14 04:59:04.390437] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:53.593 [2024-12-14 04:59:04.390474] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:54.163 04:59:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:54.163 04:59:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:09:54.163 04:59:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:09:54.163 04:59:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:54.163 04:59:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:09:54.163 04:59:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:09:54.163 04:59:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:09:54.163 04:59:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:54.163 04:59:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:54.163 04:59:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:54.163 04:59:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:09:54.163 
04:59:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.163 04:59:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.163 malloc1 00:09:54.163 04:59:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.163 04:59:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:54.163 04:59:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.163 04:59:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.163 [2024-12-14 04:59:04.992467] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:54.163 [2024-12-14 04:59:04.992583] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:54.163 [2024-12-14 04:59:04.992628] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:09:54.163 [2024-12-14 04:59:04.992696] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:54.163 [2024-12-14 04:59:04.994795] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:54.163 [2024-12-14 04:59:04.994870] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:54.163 pt1 00:09:54.163 04:59:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.163 04:59:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:54.163 04:59:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:54.163 04:59:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:09:54.163 04:59:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:09:54.163 04:59:04 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:09:54.163 04:59:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:54.163 04:59:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:54.163 04:59:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:54.163 04:59:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:09:54.163 04:59:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.163 04:59:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.163 malloc2 00:09:54.163 04:59:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.163 04:59:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:54.163 04:59:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.163 04:59:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.163 [2024-12-14 04:59:05.033067] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:54.163 [2024-12-14 04:59:05.033124] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:54.163 [2024-12-14 04:59:05.033143] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:09:54.163 [2024-12-14 04:59:05.033166] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:54.163 [2024-12-14 04:59:05.035413] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:54.163 [2024-12-14 04:59:05.035451] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:54.163 
pt2 00:09:54.163 04:59:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.163 04:59:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:54.163 04:59:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:54.163 04:59:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:09:54.163 04:59:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:09:54.163 04:59:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:09:54.163 04:59:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:54.163 04:59:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:54.163 04:59:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:54.163 04:59:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:09:54.163 04:59:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.163 04:59:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.424 malloc3 00:09:54.424 04:59:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.424 04:59:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:54.424 04:59:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.424 04:59:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.424 [2024-12-14 04:59:05.061648] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:54.424 [2024-12-14 04:59:05.061712] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:54.424 [2024-12-14 04:59:05.061729] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:09:54.424 [2024-12-14 04:59:05.061740] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:54.424 [2024-12-14 04:59:05.063764] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:54.424 [2024-12-14 04:59:05.063803] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:54.424 pt3 00:09:54.424 04:59:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.424 04:59:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:54.424 04:59:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:54.424 04:59:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:09:54.424 04:59:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:09:54.424 04:59:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:09:54.424 04:59:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:54.424 04:59:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:54.424 04:59:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:54.424 04:59:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:09:54.424 04:59:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.424 04:59:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.424 malloc4 00:09:54.424 04:59:05 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.424 04:59:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:09:54.424 04:59:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.424 04:59:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.424 [2024-12-14 04:59:05.090146] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:09:54.424 [2024-12-14 04:59:05.090207] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:54.424 [2024-12-14 04:59:05.090222] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:09:54.424 [2024-12-14 04:59:05.090234] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:54.424 [2024-12-14 04:59:05.092320] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:54.424 [2024-12-14 04:59:05.092361] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:09:54.424 pt4 00:09:54.424 04:59:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.424 04:59:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:54.424 04:59:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:54.424 04:59:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:09:54.424 04:59:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.424 04:59:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.424 [2024-12-14 04:59:05.102200] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:54.424 [2024-12-14 
04:59:05.103991] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:54.424 [2024-12-14 04:59:05.104052] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:54.424 [2024-12-14 04:59:05.104113] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:09:54.424 [2024-12-14 04:59:05.104287] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:09:54.424 [2024-12-14 04:59:05.104317] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:09:54.424 [2024-12-14 04:59:05.104566] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:54.424 [2024-12-14 04:59:05.104723] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:09:54.424 [2024-12-14 04:59:05.104745] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:09:54.424 [2024-12-14 04:59:05.104890] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:54.424 04:59:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.424 04:59:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:09:54.424 04:59:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:54.424 04:59:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:54.424 04:59:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:54.424 04:59:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:54.424 04:59:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:54.424 04:59:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:09:54.424 04:59:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:54.424 04:59:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:54.424 04:59:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:54.424 04:59:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:54.424 04:59:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:54.424 04:59:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.424 04:59:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.424 04:59:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.424 04:59:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:54.424 "name": "raid_bdev1", 00:09:54.424 "uuid": "1b70088f-80c6-4712-b684-96ea9fe5986a", 00:09:54.424 "strip_size_kb": 64, 00:09:54.424 "state": "online", 00:09:54.424 "raid_level": "concat", 00:09:54.424 "superblock": true, 00:09:54.424 "num_base_bdevs": 4, 00:09:54.424 "num_base_bdevs_discovered": 4, 00:09:54.424 "num_base_bdevs_operational": 4, 00:09:54.424 "base_bdevs_list": [ 00:09:54.424 { 00:09:54.424 "name": "pt1", 00:09:54.424 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:54.424 "is_configured": true, 00:09:54.424 "data_offset": 2048, 00:09:54.424 "data_size": 63488 00:09:54.424 }, 00:09:54.424 { 00:09:54.424 "name": "pt2", 00:09:54.424 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:54.424 "is_configured": true, 00:09:54.424 "data_offset": 2048, 00:09:54.424 "data_size": 63488 00:09:54.424 }, 00:09:54.424 { 00:09:54.424 "name": "pt3", 00:09:54.424 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:54.424 "is_configured": true, 00:09:54.424 "data_offset": 2048, 00:09:54.424 
"data_size": 63488 00:09:54.424 }, 00:09:54.424 { 00:09:54.424 "name": "pt4", 00:09:54.424 "uuid": "00000000-0000-0000-0000-000000000004", 00:09:54.424 "is_configured": true, 00:09:54.424 "data_offset": 2048, 00:09:54.424 "data_size": 63488 00:09:54.424 } 00:09:54.424 ] 00:09:54.424 }' 00:09:54.424 04:59:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:54.424 04:59:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.683 04:59:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:09:54.683 04:59:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:54.683 04:59:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:54.683 04:59:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:54.683 04:59:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:54.683 04:59:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:54.683 04:59:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:54.683 04:59:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.683 04:59:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:54.683 04:59:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.683 [2024-12-14 04:59:05.513729] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:54.683 04:59:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.683 04:59:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:54.683 "name": "raid_bdev1", 00:09:54.683 "aliases": [ 00:09:54.683 "1b70088f-80c6-4712-b684-96ea9fe5986a" 
00:09:54.683 ], 00:09:54.683 "product_name": "Raid Volume", 00:09:54.683 "block_size": 512, 00:09:54.683 "num_blocks": 253952, 00:09:54.683 "uuid": "1b70088f-80c6-4712-b684-96ea9fe5986a", 00:09:54.683 "assigned_rate_limits": { 00:09:54.683 "rw_ios_per_sec": 0, 00:09:54.683 "rw_mbytes_per_sec": 0, 00:09:54.683 "r_mbytes_per_sec": 0, 00:09:54.683 "w_mbytes_per_sec": 0 00:09:54.683 }, 00:09:54.683 "claimed": false, 00:09:54.683 "zoned": false, 00:09:54.683 "supported_io_types": { 00:09:54.683 "read": true, 00:09:54.683 "write": true, 00:09:54.683 "unmap": true, 00:09:54.683 "flush": true, 00:09:54.683 "reset": true, 00:09:54.683 "nvme_admin": false, 00:09:54.683 "nvme_io": false, 00:09:54.683 "nvme_io_md": false, 00:09:54.683 "write_zeroes": true, 00:09:54.683 "zcopy": false, 00:09:54.683 "get_zone_info": false, 00:09:54.683 "zone_management": false, 00:09:54.683 "zone_append": false, 00:09:54.683 "compare": false, 00:09:54.683 "compare_and_write": false, 00:09:54.683 "abort": false, 00:09:54.683 "seek_hole": false, 00:09:54.683 "seek_data": false, 00:09:54.683 "copy": false, 00:09:54.683 "nvme_iov_md": false 00:09:54.683 }, 00:09:54.683 "memory_domains": [ 00:09:54.683 { 00:09:54.683 "dma_device_id": "system", 00:09:54.683 "dma_device_type": 1 00:09:54.683 }, 00:09:54.683 { 00:09:54.683 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:54.683 "dma_device_type": 2 00:09:54.683 }, 00:09:54.683 { 00:09:54.683 "dma_device_id": "system", 00:09:54.683 "dma_device_type": 1 00:09:54.683 }, 00:09:54.683 { 00:09:54.683 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:54.683 "dma_device_type": 2 00:09:54.683 }, 00:09:54.683 { 00:09:54.683 "dma_device_id": "system", 00:09:54.683 "dma_device_type": 1 00:09:54.683 }, 00:09:54.683 { 00:09:54.683 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:54.683 "dma_device_type": 2 00:09:54.683 }, 00:09:54.683 { 00:09:54.683 "dma_device_id": "system", 00:09:54.683 "dma_device_type": 1 00:09:54.683 }, 00:09:54.683 { 00:09:54.683 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:09:54.683 "dma_device_type": 2 00:09:54.683 } 00:09:54.683 ], 00:09:54.683 "driver_specific": { 00:09:54.683 "raid": { 00:09:54.683 "uuid": "1b70088f-80c6-4712-b684-96ea9fe5986a", 00:09:54.683 "strip_size_kb": 64, 00:09:54.683 "state": "online", 00:09:54.683 "raid_level": "concat", 00:09:54.683 "superblock": true, 00:09:54.683 "num_base_bdevs": 4, 00:09:54.683 "num_base_bdevs_discovered": 4, 00:09:54.683 "num_base_bdevs_operational": 4, 00:09:54.683 "base_bdevs_list": [ 00:09:54.683 { 00:09:54.683 "name": "pt1", 00:09:54.683 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:54.683 "is_configured": true, 00:09:54.683 "data_offset": 2048, 00:09:54.683 "data_size": 63488 00:09:54.683 }, 00:09:54.683 { 00:09:54.683 "name": "pt2", 00:09:54.683 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:54.683 "is_configured": true, 00:09:54.683 "data_offset": 2048, 00:09:54.683 "data_size": 63488 00:09:54.683 }, 00:09:54.683 { 00:09:54.683 "name": "pt3", 00:09:54.683 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:54.683 "is_configured": true, 00:09:54.683 "data_offset": 2048, 00:09:54.683 "data_size": 63488 00:09:54.683 }, 00:09:54.683 { 00:09:54.683 "name": "pt4", 00:09:54.683 "uuid": "00000000-0000-0000-0000-000000000004", 00:09:54.683 "is_configured": true, 00:09:54.683 "data_offset": 2048, 00:09:54.683 "data_size": 63488 00:09:54.683 } 00:09:54.683 ] 00:09:54.683 } 00:09:54.683 } 00:09:54.683 }' 00:09:54.683 04:59:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:54.943 04:59:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:54.943 pt2 00:09:54.943 pt3 00:09:54.943 pt4' 00:09:54.943 04:59:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:54.943 04:59:05 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:54.943 04:59:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:54.943 04:59:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:54.943 04:59:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.943 04:59:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.943 04:59:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:54.943 04:59:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.943 04:59:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:54.943 04:59:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:54.943 04:59:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:54.943 04:59:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:54.943 04:59:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:54.943 04:59:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.943 04:59:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.943 04:59:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.943 04:59:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:54.943 04:59:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:54.943 04:59:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:54.943 04:59:05 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:54.943 04:59:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:54.943 04:59:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.943 04:59:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.943 04:59:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.943 04:59:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:54.943 04:59:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:54.943 04:59:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:54.943 04:59:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:09:54.943 04:59:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:54.943 04:59:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.943 04:59:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.943 04:59:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.203 04:59:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:55.204 04:59:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:55.204 04:59:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:55.204 04:59:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:09:55.204 04:59:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:09:55.204 04:59:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.204 [2024-12-14 04:59:05.845127] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:55.204 04:59:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.204 04:59:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=1b70088f-80c6-4712-b684-96ea9fe5986a 00:09:55.204 04:59:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 1b70088f-80c6-4712-b684-96ea9fe5986a ']' 00:09:55.204 04:59:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:55.204 04:59:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.204 04:59:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.204 [2024-12-14 04:59:05.896766] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:55.204 [2024-12-14 04:59:05.896802] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:55.204 [2024-12-14 04:59:05.896881] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:55.204 [2024-12-14 04:59:05.896962] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:55.204 [2024-12-14 04:59:05.896982] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:09:55.204 04:59:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.204 04:59:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:55.204 04:59:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.204 04:59:05 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:55.204 04:59:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:09:55.204 04:59:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.204 04:59:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:09:55.204 04:59:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:09:55.204 04:59:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:55.204 04:59:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:09:55.204 04:59:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.204 04:59:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.204 04:59:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.204 04:59:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:55.204 04:59:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:09:55.204 04:59:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.204 04:59:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.204 04:59:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.204 04:59:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:55.204 04:59:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:09:55.204 04:59:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.204 04:59:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.204 04:59:05 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.204 04:59:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:55.204 04:59:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:09:55.204 04:59:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.204 04:59:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.204 04:59:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.204 04:59:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:09:55.204 04:59:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.204 04:59:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:09:55.204 04:59:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.204 04:59:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.204 04:59:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:09:55.204 04:59:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:09:55.204 04:59:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:09:55.204 04:59:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:09:55.204 04:59:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:09:55.204 04:59:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:55.204 04:59:06 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:09:55.204 04:59:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:55.204 04:59:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:09:55.204 04:59:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.204 04:59:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.204 [2024-12-14 04:59:06.052539] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:09:55.204 [2024-12-14 04:59:06.054443] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:09:55.204 [2024-12-14 04:59:06.054494] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:09:55.204 [2024-12-14 04:59:06.054525] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:09:55.204 [2024-12-14 04:59:06.054571] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:09:55.204 [2024-12-14 04:59:06.054608] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:09:55.204 [2024-12-14 04:59:06.054627] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:09:55.204 [2024-12-14 04:59:06.054644] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:09:55.204 [2024-12-14 04:59:06.054658] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:55.204 [2024-12-14 04:59:06.054668] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000006600 name raid_bdev1, state configuring 00:09:55.204 request: 00:09:55.204 { 00:09:55.204 "name": "raid_bdev1", 00:09:55.204 "raid_level": "concat", 00:09:55.204 "base_bdevs": [ 00:09:55.204 "malloc1", 00:09:55.204 "malloc2", 00:09:55.204 "malloc3", 00:09:55.204 "malloc4" 00:09:55.204 ], 00:09:55.204 "strip_size_kb": 64, 00:09:55.204 "superblock": false, 00:09:55.204 "method": "bdev_raid_create", 00:09:55.204 "req_id": 1 00:09:55.204 } 00:09:55.204 Got JSON-RPC error response 00:09:55.204 response: 00:09:55.204 { 00:09:55.204 "code": -17, 00:09:55.204 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:09:55.204 } 00:09:55.204 04:59:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:09:55.204 04:59:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:09:55.204 04:59:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:55.204 04:59:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:55.204 04:59:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:55.204 04:59:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:55.204 04:59:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.204 04:59:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.204 04:59:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:09:55.204 04:59:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.464 04:59:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:09:55.464 04:59:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:09:55.464 04:59:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 
-u 00000000-0000-0000-0000-000000000001 00:09:55.464 04:59:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.464 04:59:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.464 [2024-12-14 04:59:06.116396] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:55.464 [2024-12-14 04:59:06.116440] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:55.464 [2024-12-14 04:59:06.116459] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:09:55.464 [2024-12-14 04:59:06.116467] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:55.464 [2024-12-14 04:59:06.118520] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:55.464 [2024-12-14 04:59:06.118553] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:55.464 [2024-12-14 04:59:06.118621] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:55.464 [2024-12-14 04:59:06.118689] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:55.464 pt1 00:09:55.464 04:59:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.464 04:59:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:09:55.464 04:59:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:55.464 04:59:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:55.464 04:59:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:55.464 04:59:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:55.464 04:59:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:09:55.464 04:59:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:55.464 04:59:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:55.464 04:59:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:55.464 04:59:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:55.464 04:59:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:55.464 04:59:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.464 04:59:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.464 04:59:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:55.464 04:59:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.464 04:59:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:55.464 "name": "raid_bdev1", 00:09:55.464 "uuid": "1b70088f-80c6-4712-b684-96ea9fe5986a", 00:09:55.464 "strip_size_kb": 64, 00:09:55.464 "state": "configuring", 00:09:55.464 "raid_level": "concat", 00:09:55.464 "superblock": true, 00:09:55.464 "num_base_bdevs": 4, 00:09:55.464 "num_base_bdevs_discovered": 1, 00:09:55.464 "num_base_bdevs_operational": 4, 00:09:55.464 "base_bdevs_list": [ 00:09:55.464 { 00:09:55.464 "name": "pt1", 00:09:55.464 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:55.464 "is_configured": true, 00:09:55.464 "data_offset": 2048, 00:09:55.464 "data_size": 63488 00:09:55.464 }, 00:09:55.464 { 00:09:55.464 "name": null, 00:09:55.464 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:55.464 "is_configured": false, 00:09:55.464 "data_offset": 2048, 00:09:55.464 "data_size": 63488 00:09:55.464 }, 00:09:55.464 { 00:09:55.464 "name": null, 00:09:55.464 
"uuid": "00000000-0000-0000-0000-000000000003", 00:09:55.464 "is_configured": false, 00:09:55.464 "data_offset": 2048, 00:09:55.464 "data_size": 63488 00:09:55.464 }, 00:09:55.464 { 00:09:55.464 "name": null, 00:09:55.464 "uuid": "00000000-0000-0000-0000-000000000004", 00:09:55.464 "is_configured": false, 00:09:55.464 "data_offset": 2048, 00:09:55.464 "data_size": 63488 00:09:55.464 } 00:09:55.464 ] 00:09:55.464 }' 00:09:55.464 04:59:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:55.464 04:59:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.724 04:59:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:09:55.724 04:59:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:55.724 04:59:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.724 04:59:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.724 [2024-12-14 04:59:06.563598] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:55.724 [2024-12-14 04:59:06.563646] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:55.724 [2024-12-14 04:59:06.563664] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:09:55.724 [2024-12-14 04:59:06.563672] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:55.724 [2024-12-14 04:59:06.564043] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:55.724 [2024-12-14 04:59:06.564068] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:55.724 [2024-12-14 04:59:06.564138] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:55.724 [2024-12-14 04:59:06.564169] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:55.724 pt2 00:09:55.724 04:59:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.724 04:59:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:09:55.724 04:59:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.724 04:59:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.724 [2024-12-14 04:59:06.575592] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:09:55.724 04:59:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.724 04:59:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:09:55.724 04:59:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:55.724 04:59:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:55.724 04:59:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:55.724 04:59:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:55.724 04:59:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:55.725 04:59:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:55.725 04:59:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:55.725 04:59:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:55.725 04:59:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:55.725 04:59:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:55.725 04:59:06 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.725 04:59:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:55.725 04:59:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.984 04:59:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.984 04:59:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:55.984 "name": "raid_bdev1", 00:09:55.984 "uuid": "1b70088f-80c6-4712-b684-96ea9fe5986a", 00:09:55.984 "strip_size_kb": 64, 00:09:55.984 "state": "configuring", 00:09:55.984 "raid_level": "concat", 00:09:55.984 "superblock": true, 00:09:55.984 "num_base_bdevs": 4, 00:09:55.984 "num_base_bdevs_discovered": 1, 00:09:55.984 "num_base_bdevs_operational": 4, 00:09:55.984 "base_bdevs_list": [ 00:09:55.984 { 00:09:55.984 "name": "pt1", 00:09:55.984 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:55.984 "is_configured": true, 00:09:55.984 "data_offset": 2048, 00:09:55.984 "data_size": 63488 00:09:55.985 }, 00:09:55.985 { 00:09:55.985 "name": null, 00:09:55.985 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:55.985 "is_configured": false, 00:09:55.985 "data_offset": 0, 00:09:55.985 "data_size": 63488 00:09:55.985 }, 00:09:55.985 { 00:09:55.985 "name": null, 00:09:55.985 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:55.985 "is_configured": false, 00:09:55.985 "data_offset": 2048, 00:09:55.985 "data_size": 63488 00:09:55.985 }, 00:09:55.985 { 00:09:55.985 "name": null, 00:09:55.985 "uuid": "00000000-0000-0000-0000-000000000004", 00:09:55.985 "is_configured": false, 00:09:55.985 "data_offset": 2048, 00:09:55.985 "data_size": 63488 00:09:55.985 } 00:09:55.985 ] 00:09:55.985 }' 00:09:55.985 04:59:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:55.985 04:59:06 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:56.245 04:59:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:09:56.245 04:59:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:56.245 04:59:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:56.245 04:59:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.245 04:59:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.245 [2024-12-14 04:59:06.999024] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:56.245 [2024-12-14 04:59:06.999076] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:56.245 [2024-12-14 04:59:06.999090] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:09:56.245 [2024-12-14 04:59:06.999100] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:56.245 [2024-12-14 04:59:06.999484] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:56.245 [2024-12-14 04:59:06.999504] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:56.245 [2024-12-14 04:59:06.999567] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:56.245 [2024-12-14 04:59:06.999588] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:56.245 pt2 00:09:56.245 04:59:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.245 04:59:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:56.245 04:59:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:56.245 04:59:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:56.245 04:59:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.245 04:59:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.245 [2024-12-14 04:59:07.010975] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:56.245 [2024-12-14 04:59:07.011026] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:56.245 [2024-12-14 04:59:07.011042] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:09:56.245 [2024-12-14 04:59:07.011052] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:56.245 [2024-12-14 04:59:07.011408] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:56.245 [2024-12-14 04:59:07.011436] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:56.245 [2024-12-14 04:59:07.011491] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:56.245 [2024-12-14 04:59:07.011511] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:56.245 pt3 00:09:56.245 04:59:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.245 04:59:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:56.245 04:59:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:56.245 04:59:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:09:56.245 04:59:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.245 04:59:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.245 [2024-12-14 04:59:07.022973] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:09:56.245 [2024-12-14 04:59:07.023021] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:56.245 [2024-12-14 04:59:07.023035] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:09:56.245 [2024-12-14 04:59:07.023043] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:56.245 [2024-12-14 04:59:07.023366] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:56.245 [2024-12-14 04:59:07.023384] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:09:56.245 [2024-12-14 04:59:07.023433] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:09:56.245 [2024-12-14 04:59:07.023451] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:09:56.245 [2024-12-14 04:59:07.023544] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:09:56.245 [2024-12-14 04:59:07.023556] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:09:56.245 [2024-12-14 04:59:07.023767] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:56.245 [2024-12-14 04:59:07.023904] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:09:56.245 [2024-12-14 04:59:07.023922] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:09:56.245 [2024-12-14 04:59:07.024026] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:56.245 pt4 00:09:56.245 04:59:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.245 04:59:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:56.245 04:59:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:09:56.245 04:59:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:09:56.245 04:59:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:56.245 04:59:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:56.245 04:59:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:56.245 04:59:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:56.245 04:59:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:56.245 04:59:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:56.245 04:59:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:56.245 04:59:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:56.245 04:59:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:56.245 04:59:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:56.245 04:59:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.245 04:59:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.245 04:59:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:56.245 04:59:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.245 04:59:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:56.245 "name": "raid_bdev1", 00:09:56.245 "uuid": "1b70088f-80c6-4712-b684-96ea9fe5986a", 00:09:56.245 "strip_size_kb": 64, 00:09:56.245 "state": "online", 00:09:56.245 "raid_level": "concat", 00:09:56.245 
"superblock": true, 00:09:56.245 "num_base_bdevs": 4, 00:09:56.245 "num_base_bdevs_discovered": 4, 00:09:56.245 "num_base_bdevs_operational": 4, 00:09:56.245 "base_bdevs_list": [ 00:09:56.245 { 00:09:56.245 "name": "pt1", 00:09:56.245 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:56.245 "is_configured": true, 00:09:56.245 "data_offset": 2048, 00:09:56.245 "data_size": 63488 00:09:56.245 }, 00:09:56.245 { 00:09:56.245 "name": "pt2", 00:09:56.245 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:56.245 "is_configured": true, 00:09:56.245 "data_offset": 2048, 00:09:56.245 "data_size": 63488 00:09:56.245 }, 00:09:56.245 { 00:09:56.245 "name": "pt3", 00:09:56.245 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:56.245 "is_configured": true, 00:09:56.245 "data_offset": 2048, 00:09:56.245 "data_size": 63488 00:09:56.245 }, 00:09:56.245 { 00:09:56.245 "name": "pt4", 00:09:56.245 "uuid": "00000000-0000-0000-0000-000000000004", 00:09:56.245 "is_configured": true, 00:09:56.245 "data_offset": 2048, 00:09:56.245 "data_size": 63488 00:09:56.245 } 00:09:56.245 ] 00:09:56.245 }' 00:09:56.245 04:59:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:56.245 04:59:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.504 04:59:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:09:56.504 04:59:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:56.504 04:59:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:56.504 04:59:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:56.504 04:59:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:56.505 04:59:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:56.505 04:59:07 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:56.505 04:59:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:56.505 04:59:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.505 04:59:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.505 [2024-12-14 04:59:07.362686] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:56.764 04:59:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.764 04:59:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:56.764 "name": "raid_bdev1", 00:09:56.764 "aliases": [ 00:09:56.764 "1b70088f-80c6-4712-b684-96ea9fe5986a" 00:09:56.764 ], 00:09:56.764 "product_name": "Raid Volume", 00:09:56.764 "block_size": 512, 00:09:56.764 "num_blocks": 253952, 00:09:56.764 "uuid": "1b70088f-80c6-4712-b684-96ea9fe5986a", 00:09:56.764 "assigned_rate_limits": { 00:09:56.764 "rw_ios_per_sec": 0, 00:09:56.764 "rw_mbytes_per_sec": 0, 00:09:56.764 "r_mbytes_per_sec": 0, 00:09:56.764 "w_mbytes_per_sec": 0 00:09:56.764 }, 00:09:56.764 "claimed": false, 00:09:56.764 "zoned": false, 00:09:56.764 "supported_io_types": { 00:09:56.764 "read": true, 00:09:56.764 "write": true, 00:09:56.764 "unmap": true, 00:09:56.764 "flush": true, 00:09:56.764 "reset": true, 00:09:56.764 "nvme_admin": false, 00:09:56.764 "nvme_io": false, 00:09:56.764 "nvme_io_md": false, 00:09:56.764 "write_zeroes": true, 00:09:56.764 "zcopy": false, 00:09:56.764 "get_zone_info": false, 00:09:56.764 "zone_management": false, 00:09:56.764 "zone_append": false, 00:09:56.764 "compare": false, 00:09:56.764 "compare_and_write": false, 00:09:56.764 "abort": false, 00:09:56.764 "seek_hole": false, 00:09:56.764 "seek_data": false, 00:09:56.764 "copy": false, 00:09:56.764 "nvme_iov_md": false 00:09:56.764 }, 00:09:56.764 
"memory_domains": [ 00:09:56.764 { 00:09:56.764 "dma_device_id": "system", 00:09:56.764 "dma_device_type": 1 00:09:56.764 }, 00:09:56.764 { 00:09:56.764 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:56.764 "dma_device_type": 2 00:09:56.764 }, 00:09:56.764 { 00:09:56.764 "dma_device_id": "system", 00:09:56.765 "dma_device_type": 1 00:09:56.765 }, 00:09:56.765 { 00:09:56.765 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:56.765 "dma_device_type": 2 00:09:56.765 }, 00:09:56.765 { 00:09:56.765 "dma_device_id": "system", 00:09:56.765 "dma_device_type": 1 00:09:56.765 }, 00:09:56.765 { 00:09:56.765 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:56.765 "dma_device_type": 2 00:09:56.765 }, 00:09:56.765 { 00:09:56.765 "dma_device_id": "system", 00:09:56.765 "dma_device_type": 1 00:09:56.765 }, 00:09:56.765 { 00:09:56.765 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:56.765 "dma_device_type": 2 00:09:56.765 } 00:09:56.765 ], 00:09:56.765 "driver_specific": { 00:09:56.765 "raid": { 00:09:56.765 "uuid": "1b70088f-80c6-4712-b684-96ea9fe5986a", 00:09:56.765 "strip_size_kb": 64, 00:09:56.765 "state": "online", 00:09:56.765 "raid_level": "concat", 00:09:56.765 "superblock": true, 00:09:56.765 "num_base_bdevs": 4, 00:09:56.765 "num_base_bdevs_discovered": 4, 00:09:56.765 "num_base_bdevs_operational": 4, 00:09:56.765 "base_bdevs_list": [ 00:09:56.765 { 00:09:56.765 "name": "pt1", 00:09:56.765 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:56.765 "is_configured": true, 00:09:56.765 "data_offset": 2048, 00:09:56.765 "data_size": 63488 00:09:56.765 }, 00:09:56.765 { 00:09:56.765 "name": "pt2", 00:09:56.765 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:56.765 "is_configured": true, 00:09:56.765 "data_offset": 2048, 00:09:56.765 "data_size": 63488 00:09:56.765 }, 00:09:56.765 { 00:09:56.765 "name": "pt3", 00:09:56.765 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:56.765 "is_configured": true, 00:09:56.765 "data_offset": 2048, 00:09:56.765 "data_size": 63488 
00:09:56.765 }, 00:09:56.765 { 00:09:56.765 "name": "pt4", 00:09:56.765 "uuid": "00000000-0000-0000-0000-000000000004", 00:09:56.765 "is_configured": true, 00:09:56.765 "data_offset": 2048, 00:09:56.765 "data_size": 63488 00:09:56.765 } 00:09:56.765 ] 00:09:56.765 } 00:09:56.765 } 00:09:56.765 }' 00:09:56.765 04:59:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:56.765 04:59:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:56.765 pt2 00:09:56.765 pt3 00:09:56.765 pt4' 00:09:56.765 04:59:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:56.765 04:59:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:56.765 04:59:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:56.765 04:59:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:56.765 04:59:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.765 04:59:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.765 04:59:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:56.765 04:59:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.765 04:59:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:56.765 04:59:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:56.765 04:59:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:56.765 04:59:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:09:56.765 04:59:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:56.765 04:59:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.765 04:59:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.765 04:59:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.765 04:59:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:56.765 04:59:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:56.765 04:59:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:56.765 04:59:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:56.765 04:59:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:56.765 04:59:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.765 04:59:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.765 04:59:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.765 04:59:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:56.765 04:59:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:56.765 04:59:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:56.765 04:59:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:09:56.765 04:59:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:09:56.765 04:59:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.765 04:59:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.025 04:59:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.025 04:59:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:57.025 04:59:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:57.025 04:59:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:57.025 04:59:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.025 04:59:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.025 04:59:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:09:57.025 [2024-12-14 04:59:07.690140] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:57.025 04:59:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.025 04:59:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 1b70088f-80c6-4712-b684-96ea9fe5986a '!=' 1b70088f-80c6-4712-b684-96ea9fe5986a ']' 00:09:57.025 04:59:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:09:57.025 04:59:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:57.025 04:59:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:57.025 04:59:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 83475 00:09:57.025 04:59:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 83475 ']' 00:09:57.025 04:59:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 83475 00:09:57.025 04:59:07 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@955 -- # uname 00:09:57.025 04:59:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:57.025 04:59:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83475 00:09:57.025 04:59:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:57.025 04:59:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:57.025 killing process with pid 83475 00:09:57.025 04:59:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83475' 00:09:57.025 04:59:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 83475 00:09:57.025 [2024-12-14 04:59:07.777090] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:57.025 [2024-12-14 04:59:07.777208] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:57.025 [2024-12-14 04:59:07.777289] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:57.025 [2024-12-14 04:59:07.777308] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:09:57.025 04:59:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 83475 00:09:57.025 [2024-12-14 04:59:07.819872] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:57.286 04:59:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:09:57.286 00:09:57.286 real 0m4.005s 00:09:57.286 user 0m6.259s 00:09:57.286 sys 0m0.906s 00:09:57.286 04:59:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:57.286 04:59:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.286 ************************************ 00:09:57.286 END TEST raid_superblock_test 
00:09:57.286 ************************************ 00:09:57.286 04:59:08 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 4 read 00:09:57.286 04:59:08 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:57.286 04:59:08 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:57.286 04:59:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:57.286 ************************************ 00:09:57.286 START TEST raid_read_error_test 00:09:57.286 ************************************ 00:09:57.286 04:59:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 4 read 00:09:57.286 04:59:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:09:57.286 04:59:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:09:57.286 04:59:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:09:57.286 04:59:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:57.286 04:59:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:57.286 04:59:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:57.286 04:59:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:57.286 04:59:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:57.286 04:59:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:57.286 04:59:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:57.286 04:59:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:57.286 04:59:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:57.286 04:59:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # 
(( i++ )) 00:09:57.286 04:59:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:57.286 04:59:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:09:57.286 04:59:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:57.286 04:59:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:57.286 04:59:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:09:57.286 04:59:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:57.286 04:59:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:57.286 04:59:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:57.286 04:59:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:57.286 04:59:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:57.286 04:59:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:57.286 04:59:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:09:57.286 04:59:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:57.286 04:59:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:57.286 04:59:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:57.286 04:59:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.YPZZpPcGuN 00:09:57.286 04:59:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=83718 00:09:57.286 04:59:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z 
-f -L bdev_raid 00:09:57.286 04:59:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 83718 00:09:57.286 04:59:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 83718 ']' 00:09:57.286 04:59:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:57.286 04:59:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:57.286 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:57.286 04:59:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:57.286 04:59:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:57.286 04:59:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.547 [2024-12-14 04:59:08.242771] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:09:57.547 [2024-12-14 04:59:08.242884] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83718 ] 00:09:57.547 [2024-12-14 04:59:08.387786] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:57.806 [2024-12-14 04:59:08.433035] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:57.806 [2024-12-14 04:59:08.474758] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:57.806 [2024-12-14 04:59:08.474799] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:58.375 04:59:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:58.375 04:59:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:09:58.375 04:59:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:58.375 04:59:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:58.375 04:59:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.375 04:59:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.375 BaseBdev1_malloc 00:09:58.375 04:59:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.375 04:59:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:58.375 04:59:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.375 04:59:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.375 true 00:09:58.375 04:59:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:09:58.375 04:59:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:58.375 04:59:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.375 04:59:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.375 [2024-12-14 04:59:09.088632] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:58.375 [2024-12-14 04:59:09.088691] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:58.375 [2024-12-14 04:59:09.088712] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:58.375 [2024-12-14 04:59:09.088721] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:58.375 [2024-12-14 04:59:09.090764] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:58.375 [2024-12-14 04:59:09.090800] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:58.375 BaseBdev1 00:09:58.375 04:59:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.375 04:59:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:58.375 04:59:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:58.375 04:59:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.375 04:59:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.375 BaseBdev2_malloc 00:09:58.375 04:59:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.375 04:59:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:58.375 04:59:09 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.375 04:59:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.375 true 00:09:58.375 04:59:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.375 04:59:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:58.375 04:59:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.375 04:59:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.375 [2024-12-14 04:59:09.142068] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:58.375 [2024-12-14 04:59:09.142133] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:58.375 [2024-12-14 04:59:09.142177] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:58.375 [2024-12-14 04:59:09.142192] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:58.375 [2024-12-14 04:59:09.145262] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:58.375 [2024-12-14 04:59:09.145316] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:58.375 BaseBdev2 00:09:58.375 04:59:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.375 04:59:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:58.375 04:59:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:58.375 04:59:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.375 04:59:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.375 BaseBdev3_malloc 00:09:58.375 04:59:09 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.375 04:59:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:58.375 04:59:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.375 04:59:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.375 true 00:09:58.375 04:59:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.375 04:59:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:58.375 04:59:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.375 04:59:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.375 [2024-12-14 04:59:09.182716] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:58.375 [2024-12-14 04:59:09.182758] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:58.375 [2024-12-14 04:59:09.182774] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:58.375 [2024-12-14 04:59:09.182783] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:58.375 [2024-12-14 04:59:09.184795] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:58.375 [2024-12-14 04:59:09.184830] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:09:58.375 BaseBdev3 00:09:58.375 04:59:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.375 04:59:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:58.375 04:59:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:09:58.375 04:59:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.375 04:59:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.375 BaseBdev4_malloc 00:09:58.375 04:59:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.375 04:59:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:09:58.375 04:59:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.375 04:59:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.375 true 00:09:58.375 04:59:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.375 04:59:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:09:58.375 04:59:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.375 04:59:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.375 [2024-12-14 04:59:09.223155] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:09:58.375 [2024-12-14 04:59:09.223213] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:58.375 [2024-12-14 04:59:09.223250] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:09:58.375 [2024-12-14 04:59:09.223258] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:58.375 [2024-12-14 04:59:09.225260] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:58.375 [2024-12-14 04:59:09.225291] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:09:58.375 BaseBdev4 00:09:58.375 04:59:09 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.375 04:59:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:09:58.375 04:59:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.375 04:59:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.375 [2024-12-14 04:59:09.235187] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:58.375 [2024-12-14 04:59:09.237005] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:58.375 [2024-12-14 04:59:09.237091] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:58.375 [2024-12-14 04:59:09.237157] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:09:58.375 [2024-12-14 04:59:09.237402] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007080 00:09:58.375 [2024-12-14 04:59:09.237425] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:09:58.376 [2024-12-14 04:59:09.237679] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:58.376 [2024-12-14 04:59:09.237850] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007080 00:09:58.376 [2024-12-14 04:59:09.237874] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007080 00:09:58.376 [2024-12-14 04:59:09.238008] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:58.376 04:59:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.376 04:59:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:09:58.376 04:59:09 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:58.376 04:59:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:58.376 04:59:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:58.376 04:59:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:58.376 04:59:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:58.376 04:59:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:58.376 04:59:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:58.376 04:59:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:58.376 04:59:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:58.376 04:59:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:58.376 04:59:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:58.376 04:59:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.376 04:59:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.635 04:59:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.635 04:59:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:58.635 "name": "raid_bdev1", 00:09:58.635 "uuid": "9009eb6e-011f-4643-84ee-4473c5e9f7c4", 00:09:58.635 "strip_size_kb": 64, 00:09:58.635 "state": "online", 00:09:58.635 "raid_level": "concat", 00:09:58.635 "superblock": true, 00:09:58.635 "num_base_bdevs": 4, 00:09:58.635 "num_base_bdevs_discovered": 4, 00:09:58.635 "num_base_bdevs_operational": 4, 00:09:58.635 "base_bdevs_list": [ 
00:09:58.635 { 00:09:58.635 "name": "BaseBdev1", 00:09:58.635 "uuid": "38be4099-e1bd-51bb-8e38-5be5fcd95aad", 00:09:58.635 "is_configured": true, 00:09:58.635 "data_offset": 2048, 00:09:58.635 "data_size": 63488 00:09:58.635 }, 00:09:58.635 { 00:09:58.635 "name": "BaseBdev2", 00:09:58.635 "uuid": "837704c4-34fc-5451-837f-bcb98e40785d", 00:09:58.635 "is_configured": true, 00:09:58.635 "data_offset": 2048, 00:09:58.635 "data_size": 63488 00:09:58.635 }, 00:09:58.635 { 00:09:58.635 "name": "BaseBdev3", 00:09:58.635 "uuid": "2bbb7a8e-973a-5be3-bf43-ff6594391d99", 00:09:58.635 "is_configured": true, 00:09:58.635 "data_offset": 2048, 00:09:58.635 "data_size": 63488 00:09:58.635 }, 00:09:58.635 { 00:09:58.635 "name": "BaseBdev4", 00:09:58.635 "uuid": "f3e1722d-8dda-5891-b149-e6ba6c4b04a6", 00:09:58.635 "is_configured": true, 00:09:58.635 "data_offset": 2048, 00:09:58.635 "data_size": 63488 00:09:58.635 } 00:09:58.635 ] 00:09:58.635 }' 00:09:58.635 04:59:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:58.635 04:59:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.895 04:59:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:58.895 04:59:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:58.895 [2024-12-14 04:59:09.710716] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:09:59.835 04:59:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:09:59.835 04:59:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.835 04:59:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.835 04:59:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.835 04:59:10 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:59.835 04:59:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:09:59.835 04:59:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:09:59.835 04:59:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:09:59.835 04:59:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:59.835 04:59:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:59.835 04:59:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:59.835 04:59:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:59.835 04:59:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:59.835 04:59:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:59.835 04:59:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:59.835 04:59:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:59.835 04:59:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:59.835 04:59:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:59.835 04:59:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:59.835 04:59:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.835 04:59:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.835 04:59:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.835 04:59:10 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:59.835 "name": "raid_bdev1", 00:09:59.835 "uuid": "9009eb6e-011f-4643-84ee-4473c5e9f7c4", 00:09:59.835 "strip_size_kb": 64, 00:09:59.835 "state": "online", 00:09:59.835 "raid_level": "concat", 00:09:59.835 "superblock": true, 00:09:59.835 "num_base_bdevs": 4, 00:09:59.835 "num_base_bdevs_discovered": 4, 00:09:59.835 "num_base_bdevs_operational": 4, 00:09:59.835 "base_bdevs_list": [ 00:09:59.835 { 00:09:59.835 "name": "BaseBdev1", 00:09:59.835 "uuid": "38be4099-e1bd-51bb-8e38-5be5fcd95aad", 00:09:59.835 "is_configured": true, 00:09:59.835 "data_offset": 2048, 00:09:59.835 "data_size": 63488 00:09:59.835 }, 00:09:59.835 { 00:09:59.835 "name": "BaseBdev2", 00:09:59.835 "uuid": "837704c4-34fc-5451-837f-bcb98e40785d", 00:09:59.835 "is_configured": true, 00:09:59.835 "data_offset": 2048, 00:09:59.835 "data_size": 63488 00:09:59.835 }, 00:09:59.835 { 00:09:59.835 "name": "BaseBdev3", 00:09:59.835 "uuid": "2bbb7a8e-973a-5be3-bf43-ff6594391d99", 00:09:59.835 "is_configured": true, 00:09:59.835 "data_offset": 2048, 00:09:59.835 "data_size": 63488 00:09:59.835 }, 00:09:59.835 { 00:09:59.835 "name": "BaseBdev4", 00:09:59.835 "uuid": "f3e1722d-8dda-5891-b149-e6ba6c4b04a6", 00:09:59.835 "is_configured": true, 00:09:59.835 "data_offset": 2048, 00:09:59.835 "data_size": 63488 00:09:59.835 } 00:09:59.835 ] 00:09:59.835 }' 00:09:59.835 04:59:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:59.835 04:59:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.405 04:59:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:00.405 04:59:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.405 04:59:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.405 [2024-12-14 04:59:11.114611] 
bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:00.405 [2024-12-14 04:59:11.114646] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:00.405 [2024-12-14 04:59:11.117080] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:00.405 [2024-12-14 04:59:11.117144] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:00.405 [2024-12-14 04:59:11.117221] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:00.405 [2024-12-14 04:59:11.117235] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state offline 00:10:00.405 { 00:10:00.405 "results": [ 00:10:00.405 { 00:10:00.405 "job": "raid_bdev1", 00:10:00.405 "core_mask": "0x1", 00:10:00.405 "workload": "randrw", 00:10:00.405 "percentage": 50, 00:10:00.405 "status": "finished", 00:10:00.405 "queue_depth": 1, 00:10:00.405 "io_size": 131072, 00:10:00.405 "runtime": 1.404693, 00:10:00.405 "iops": 17280.644240414098, 00:10:00.405 "mibps": 2160.0805300517623, 00:10:00.405 "io_failed": 1, 00:10:00.405 "io_timeout": 0, 00:10:00.405 "avg_latency_us": 80.35693544223531, 00:10:00.405 "min_latency_us": 24.482096069868994, 00:10:00.405 "max_latency_us": 1452.380786026201 00:10:00.405 } 00:10:00.405 ], 00:10:00.405 "core_count": 1 00:10:00.405 } 00:10:00.405 04:59:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.405 04:59:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 83718 00:10:00.405 04:59:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 83718 ']' 00:10:00.405 04:59:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 83718 00:10:00.405 04:59:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:10:00.405 04:59:11 bdev_raid.raid_read_error_test 
-- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:00.405 04:59:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83718 00:10:00.405 04:59:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:00.405 04:59:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:00.405 killing process with pid 83718 00:10:00.405 04:59:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83718' 00:10:00.405 04:59:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 83718 00:10:00.405 [2024-12-14 04:59:11.153435] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:00.405 04:59:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 83718 00:10:00.405 [2024-12-14 04:59:11.188079] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:00.666 04:59:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.YPZZpPcGuN 00:10:00.666 04:59:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:00.666 04:59:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:00.666 04:59:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:10:00.666 04:59:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:10:00.666 04:59:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:00.666 04:59:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:00.666 04:59:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:10:00.666 00:10:00.666 real 0m3.298s 00:10:00.666 user 0m4.093s 00:10:00.666 sys 0m0.568s 00:10:00.666 04:59:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:10:00.666 04:59:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.666 ************************************ 00:10:00.666 END TEST raid_read_error_test 00:10:00.666 ************************************ 00:10:00.666 04:59:11 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 4 write 00:10:00.666 04:59:11 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:00.666 04:59:11 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:00.666 04:59:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:00.666 ************************************ 00:10:00.666 START TEST raid_write_error_test 00:10:00.666 ************************************ 00:10:00.666 04:59:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 4 write 00:10:00.666 04:59:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:10:00.666 04:59:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:10:00.666 04:59:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:10:00.666 04:59:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:00.666 04:59:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:00.666 04:59:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:00.666 04:59:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:00.666 04:59:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:00.666 04:59:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:00.666 04:59:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:00.666 04:59:11 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:00.666 04:59:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:00.666 04:59:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:00.666 04:59:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:00.666 04:59:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:10:00.666 04:59:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:00.666 04:59:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:00.666 04:59:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:00.666 04:59:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:00.666 04:59:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:00.666 04:59:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:00.666 04:59:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:00.666 04:59:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:00.666 04:59:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:00.666 04:59:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:10:00.666 04:59:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:00.666 04:59:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:00.666 04:59:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:00.666 04:59:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.2UPIIT2BNW 00:10:00.666 04:59:11 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=83852 00:10:00.666 04:59:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:00.666 04:59:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 83852 00:10:00.666 04:59:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 83852 ']' 00:10:00.666 04:59:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:00.666 04:59:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:00.666 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:00.666 04:59:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:00.666 04:59:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:00.666 04:59:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.926 [2024-12-14 04:59:11.609469] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:10:00.926 [2024-12-14 04:59:11.609602] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83852 ] 00:10:00.926 [2024-12-14 04:59:11.771848] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:01.185 [2024-12-14 04:59:11.818328] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:01.185 [2024-12-14 04:59:11.860258] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:01.185 [2024-12-14 04:59:11.860301] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:01.756 04:59:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:01.756 04:59:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:10:01.756 04:59:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:01.756 04:59:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:01.756 04:59:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.756 04:59:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.756 BaseBdev1_malloc 00:10:01.756 04:59:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.756 04:59:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:01.756 04:59:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.756 04:59:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.756 true 00:10:01.756 04:59:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:10:01.756 04:59:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:01.756 04:59:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.756 04:59:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.756 [2024-12-14 04:59:12.458289] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:01.756 [2024-12-14 04:59:12.458367] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:01.756 [2024-12-14 04:59:12.458389] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:01.756 [2024-12-14 04:59:12.458397] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:01.756 [2024-12-14 04:59:12.460445] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:01.756 [2024-12-14 04:59:12.460486] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:01.756 BaseBdev1 00:10:01.756 04:59:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.756 04:59:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:01.756 04:59:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:01.756 04:59:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.756 04:59:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.756 BaseBdev2_malloc 00:10:01.756 04:59:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.756 04:59:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:01.756 04:59:12 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.756 04:59:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.756 true 00:10:01.756 04:59:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.756 04:59:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:01.756 04:59:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.756 04:59:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.756 [2024-12-14 04:59:12.515874] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:01.756 [2024-12-14 04:59:12.515940] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:01.756 [2024-12-14 04:59:12.515968] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:01.756 [2024-12-14 04:59:12.515981] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:01.756 [2024-12-14 04:59:12.519044] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:01.756 [2024-12-14 04:59:12.519093] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:01.756 BaseBdev2 00:10:01.756 04:59:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.757 04:59:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:01.757 04:59:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:01.757 04:59:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.757 04:59:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:10:01.757 BaseBdev3_malloc 00:10:01.757 04:59:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.757 04:59:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:01.757 04:59:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.757 04:59:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.757 true 00:10:01.757 04:59:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.757 04:59:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:01.757 04:59:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.757 04:59:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.757 [2024-12-14 04:59:12.556804] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:01.757 [2024-12-14 04:59:12.556850] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:01.757 [2024-12-14 04:59:12.556869] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:01.757 [2024-12-14 04:59:12.556878] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:01.757 [2024-12-14 04:59:12.558963] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:01.757 [2024-12-14 04:59:12.559000] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:01.757 BaseBdev3 00:10:01.757 04:59:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.757 04:59:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:01.757 04:59:12 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:10:01.757 04:59:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.757 04:59:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.757 BaseBdev4_malloc 00:10:01.757 04:59:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.757 04:59:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:10:01.757 04:59:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.757 04:59:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.757 true 00:10:01.757 04:59:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.757 04:59:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:10:01.757 04:59:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.757 04:59:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.757 [2024-12-14 04:59:12.597298] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:10:01.757 [2024-12-14 04:59:12.597342] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:01.757 [2024-12-14 04:59:12.597362] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:01.757 [2024-12-14 04:59:12.597371] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:01.757 [2024-12-14 04:59:12.599355] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:01.757 [2024-12-14 04:59:12.599390] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:10:01.757 BaseBdev4 
00:10:01.757 04:59:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.757 04:59:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:10:01.757 04:59:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.757 04:59:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.757 [2024-12-14 04:59:12.609333] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:01.757 [2024-12-14 04:59:12.611118] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:01.757 [2024-12-14 04:59:12.611249] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:01.757 [2024-12-14 04:59:12.611304] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:01.757 [2024-12-14 04:59:12.611540] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007080 00:10:01.757 [2024-12-14 04:59:12.611564] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:01.757 [2024-12-14 04:59:12.611819] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:10:01.757 [2024-12-14 04:59:12.611969] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007080 00:10:01.757 [2024-12-14 04:59:12.611992] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007080 00:10:01.757 [2024-12-14 04:59:12.612136] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:01.757 04:59:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.757 04:59:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online concat 64 4 00:10:01.757 04:59:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:01.757 04:59:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:01.757 04:59:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:01.757 04:59:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:01.757 04:59:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:01.757 04:59:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:01.757 04:59:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:01.757 04:59:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:01.757 04:59:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:01.757 04:59:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:01.757 04:59:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:01.757 04:59:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.757 04:59:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.017 04:59:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.017 04:59:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:02.017 "name": "raid_bdev1", 00:10:02.017 "uuid": "3cf8da81-c55d-41c2-b1b9-9afa2e0991c7", 00:10:02.017 "strip_size_kb": 64, 00:10:02.017 "state": "online", 00:10:02.017 "raid_level": "concat", 00:10:02.017 "superblock": true, 00:10:02.017 "num_base_bdevs": 4, 00:10:02.017 "num_base_bdevs_discovered": 4, 00:10:02.017 
"num_base_bdevs_operational": 4, 00:10:02.017 "base_bdevs_list": [ 00:10:02.017 { 00:10:02.017 "name": "BaseBdev1", 00:10:02.017 "uuid": "dacdc9d3-73c4-51bc-afc3-c323610e8f43", 00:10:02.017 "is_configured": true, 00:10:02.017 "data_offset": 2048, 00:10:02.017 "data_size": 63488 00:10:02.017 }, 00:10:02.017 { 00:10:02.017 "name": "BaseBdev2", 00:10:02.017 "uuid": "9f961cc5-d52a-5e08-ab89-6e5344da6132", 00:10:02.017 "is_configured": true, 00:10:02.017 "data_offset": 2048, 00:10:02.017 "data_size": 63488 00:10:02.017 }, 00:10:02.017 { 00:10:02.017 "name": "BaseBdev3", 00:10:02.017 "uuid": "61cea084-66fc-5d30-bb9f-0812b8e76627", 00:10:02.017 "is_configured": true, 00:10:02.017 "data_offset": 2048, 00:10:02.017 "data_size": 63488 00:10:02.017 }, 00:10:02.017 { 00:10:02.017 "name": "BaseBdev4", 00:10:02.017 "uuid": "ebe62c36-e70e-5382-8b7a-089168202e9c", 00:10:02.017 "is_configured": true, 00:10:02.017 "data_offset": 2048, 00:10:02.017 "data_size": 63488 00:10:02.017 } 00:10:02.017 ] 00:10:02.017 }' 00:10:02.017 04:59:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:02.017 04:59:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.277 04:59:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:02.277 04:59:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:02.277 [2024-12-14 04:59:13.096818] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:03.216 04:59:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:10:03.216 04:59:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.216 04:59:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.216 04:59:14 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.216 04:59:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:03.216 04:59:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:10:03.216 04:59:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:10:03.216 04:59:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:10:03.216 04:59:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:03.216 04:59:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:03.216 04:59:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:03.216 04:59:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:03.216 04:59:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:03.216 04:59:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:03.216 04:59:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:03.216 04:59:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:03.216 04:59:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:03.216 04:59:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:03.216 04:59:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:03.216 04:59:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.216 04:59:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.216 04:59:14 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.216 04:59:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:03.216 "name": "raid_bdev1", 00:10:03.216 "uuid": "3cf8da81-c55d-41c2-b1b9-9afa2e0991c7", 00:10:03.216 "strip_size_kb": 64, 00:10:03.216 "state": "online", 00:10:03.216 "raid_level": "concat", 00:10:03.216 "superblock": true, 00:10:03.216 "num_base_bdevs": 4, 00:10:03.216 "num_base_bdevs_discovered": 4, 00:10:03.216 "num_base_bdevs_operational": 4, 00:10:03.216 "base_bdevs_list": [ 00:10:03.216 { 00:10:03.216 "name": "BaseBdev1", 00:10:03.216 "uuid": "dacdc9d3-73c4-51bc-afc3-c323610e8f43", 00:10:03.216 "is_configured": true, 00:10:03.216 "data_offset": 2048, 00:10:03.216 "data_size": 63488 00:10:03.216 }, 00:10:03.216 { 00:10:03.216 "name": "BaseBdev2", 00:10:03.216 "uuid": "9f961cc5-d52a-5e08-ab89-6e5344da6132", 00:10:03.216 "is_configured": true, 00:10:03.216 "data_offset": 2048, 00:10:03.216 "data_size": 63488 00:10:03.216 }, 00:10:03.216 { 00:10:03.216 "name": "BaseBdev3", 00:10:03.216 "uuid": "61cea084-66fc-5d30-bb9f-0812b8e76627", 00:10:03.216 "is_configured": true, 00:10:03.216 "data_offset": 2048, 00:10:03.216 "data_size": 63488 00:10:03.216 }, 00:10:03.216 { 00:10:03.216 "name": "BaseBdev4", 00:10:03.216 "uuid": "ebe62c36-e70e-5382-8b7a-089168202e9c", 00:10:03.216 "is_configured": true, 00:10:03.216 "data_offset": 2048, 00:10:03.216 "data_size": 63488 00:10:03.216 } 00:10:03.216 ] 00:10:03.216 }' 00:10:03.216 04:59:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:03.216 04:59:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.786 04:59:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:03.786 04:59:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.786 04:59:14 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:03.786 [2024-12-14 04:59:14.448789] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:03.786 [2024-12-14 04:59:14.448825] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:03.786 [2024-12-14 04:59:14.451257] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:03.786 [2024-12-14 04:59:14.451313] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:03.786 [2024-12-14 04:59:14.451358] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:03.786 [2024-12-14 04:59:14.451367] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state offline 00:10:03.786 { 00:10:03.786 "results": [ 00:10:03.786 { 00:10:03.786 "job": "raid_bdev1", 00:10:03.786 "core_mask": "0x1", 00:10:03.786 "workload": "randrw", 00:10:03.786 "percentage": 50, 00:10:03.786 "status": "finished", 00:10:03.786 "queue_depth": 1, 00:10:03.786 "io_size": 131072, 00:10:03.786 "runtime": 1.352785, 00:10:03.786 "iops": 17225.20577918886, 00:10:03.786 "mibps": 2153.1507223986073, 00:10:03.786 "io_failed": 1, 00:10:03.786 "io_timeout": 0, 00:10:03.786 "avg_latency_us": 80.51201623870233, 00:10:03.786 "min_latency_us": 24.370305676855896, 00:10:03.786 "max_latency_us": 1409.4532751091704 00:10:03.786 } 00:10:03.786 ], 00:10:03.786 "core_count": 1 00:10:03.786 } 00:10:03.786 04:59:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.786 04:59:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 83852 00:10:03.786 04:59:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 83852 ']' 00:10:03.786 04:59:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 83852 00:10:03.786 04:59:14 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@955 -- # uname 00:10:03.786 04:59:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:03.786 04:59:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83852 00:10:03.786 04:59:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:03.786 04:59:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:03.786 killing process with pid 83852 00:10:03.786 04:59:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83852' 00:10:03.786 04:59:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 83852 00:10:03.786 [2024-12-14 04:59:14.498922] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:03.786 04:59:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 83852 00:10:03.786 [2024-12-14 04:59:14.532890] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:04.047 04:59:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:04.047 04:59:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.2UPIIT2BNW 00:10:04.047 04:59:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:04.047 04:59:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:10:04.047 04:59:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:10:04.047 04:59:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:04.047 04:59:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:04.047 04:59:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:10:04.047 00:10:04.047 real 0m3.275s 00:10:04.047 user 0m4.066s 
00:10:04.047 sys 0m0.537s 00:10:04.047 04:59:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:04.047 04:59:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.047 ************************************ 00:10:04.047 END TEST raid_write_error_test 00:10:04.047 ************************************ 00:10:04.047 04:59:14 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:10:04.047 04:59:14 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:10:04.047 04:59:14 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:04.047 04:59:14 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:04.047 04:59:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:04.047 ************************************ 00:10:04.047 START TEST raid_state_function_test 00:10:04.047 ************************************ 00:10:04.047 04:59:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 4 false 00:10:04.047 04:59:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:10:04.047 04:59:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:10:04.047 04:59:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:10:04.047 04:59:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:04.047 04:59:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:04.047 04:59:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:04.047 04:59:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:04.047 04:59:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:04.047 
04:59:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:04.047 04:59:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:04.047 04:59:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:04.047 04:59:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:04.047 04:59:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:04.047 04:59:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:04.047 04:59:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:04.047 04:59:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:10:04.047 04:59:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:04.047 04:59:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:04.047 04:59:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:04.047 04:59:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:04.047 04:59:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:04.047 04:59:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:04.047 04:59:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:04.047 04:59:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:04.047 04:59:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:10:04.047 04:59:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:10:04.047 04:59:14 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:10:04.047 04:59:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:10:04.047 04:59:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=83986 00:10:04.047 04:59:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:04.047 04:59:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 83986' 00:10:04.047 Process raid pid: 83986 00:10:04.047 04:59:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 83986 00:10:04.047 04:59:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 83986 ']' 00:10:04.047 04:59:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:04.047 04:59:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:04.047 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:04.047 04:59:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:04.047 04:59:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:04.047 04:59:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.309 [2024-12-14 04:59:14.949735] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:10:04.309 [2024-12-14 04:59:14.949882] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:04.309 [2024-12-14 04:59:15.107854] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:04.309 [2024-12-14 04:59:15.152833] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:04.568 [2024-12-14 04:59:15.194858] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:04.568 [2024-12-14 04:59:15.194906] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:05.138 04:59:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:05.138 04:59:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:10:05.138 04:59:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:05.138 04:59:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.138 04:59:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.138 [2024-12-14 04:59:15.768237] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:05.138 [2024-12-14 04:59:15.768293] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:05.138 [2024-12-14 04:59:15.768305] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:05.138 [2024-12-14 04:59:15.768314] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:05.138 [2024-12-14 04:59:15.768322] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:10:05.138 [2024-12-14 04:59:15.768333] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:05.138 [2024-12-14 04:59:15.768339] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:05.138 [2024-12-14 04:59:15.768348] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:05.138 04:59:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.138 04:59:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:05.138 04:59:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:05.138 04:59:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:05.138 04:59:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:05.138 04:59:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:05.138 04:59:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:05.138 04:59:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:05.138 04:59:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:05.138 04:59:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:05.138 04:59:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:05.138 04:59:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:05.138 04:59:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:05.138 04:59:15 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.138 04:59:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.138 04:59:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.138 04:59:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:05.138 "name": "Existed_Raid", 00:10:05.138 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:05.138 "strip_size_kb": 0, 00:10:05.138 "state": "configuring", 00:10:05.138 "raid_level": "raid1", 00:10:05.138 "superblock": false, 00:10:05.138 "num_base_bdevs": 4, 00:10:05.138 "num_base_bdevs_discovered": 0, 00:10:05.138 "num_base_bdevs_operational": 4, 00:10:05.138 "base_bdevs_list": [ 00:10:05.138 { 00:10:05.138 "name": "BaseBdev1", 00:10:05.138 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:05.138 "is_configured": false, 00:10:05.138 "data_offset": 0, 00:10:05.138 "data_size": 0 00:10:05.138 }, 00:10:05.138 { 00:10:05.138 "name": "BaseBdev2", 00:10:05.138 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:05.138 "is_configured": false, 00:10:05.138 "data_offset": 0, 00:10:05.138 "data_size": 0 00:10:05.138 }, 00:10:05.138 { 00:10:05.138 "name": "BaseBdev3", 00:10:05.138 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:05.138 "is_configured": false, 00:10:05.138 "data_offset": 0, 00:10:05.138 "data_size": 0 00:10:05.138 }, 00:10:05.138 { 00:10:05.139 "name": "BaseBdev4", 00:10:05.139 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:05.139 "is_configured": false, 00:10:05.139 "data_offset": 0, 00:10:05.139 "data_size": 0 00:10:05.139 } 00:10:05.139 ] 00:10:05.139 }' 00:10:05.139 04:59:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:05.139 04:59:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.399 04:59:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:10:05.399 04:59:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.399 04:59:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.399 [2024-12-14 04:59:16.207373] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:05.399 [2024-12-14 04:59:16.207419] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:10:05.399 04:59:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.399 04:59:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:05.399 04:59:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.399 04:59:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.399 [2024-12-14 04:59:16.215395] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:05.399 [2024-12-14 04:59:16.215434] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:05.399 [2024-12-14 04:59:16.215442] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:05.399 [2024-12-14 04:59:16.215467] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:05.399 [2024-12-14 04:59:16.215473] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:05.399 [2024-12-14 04:59:16.215481] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:05.399 [2024-12-14 04:59:16.215487] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:05.399 [2024-12-14 04:59:16.215495] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:05.399 04:59:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.399 04:59:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:05.399 04:59:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.399 04:59:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.399 [2024-12-14 04:59:16.232090] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:05.399 BaseBdev1 00:10:05.399 04:59:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.399 04:59:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:05.399 04:59:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:10:05.399 04:59:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:05.399 04:59:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:05.399 04:59:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:05.399 04:59:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:05.399 04:59:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:05.399 04:59:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.399 04:59:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.399 04:59:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.399 04:59:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:05.399 04:59:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.399 04:59:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.399 [ 00:10:05.399 { 00:10:05.399 "name": "BaseBdev1", 00:10:05.399 "aliases": [ 00:10:05.399 "6a43f45c-bfde-45b5-99b3-7e9dec7904fe" 00:10:05.399 ], 00:10:05.399 "product_name": "Malloc disk", 00:10:05.399 "block_size": 512, 00:10:05.399 "num_blocks": 65536, 00:10:05.399 "uuid": "6a43f45c-bfde-45b5-99b3-7e9dec7904fe", 00:10:05.399 "assigned_rate_limits": { 00:10:05.399 "rw_ios_per_sec": 0, 00:10:05.399 "rw_mbytes_per_sec": 0, 00:10:05.399 "r_mbytes_per_sec": 0, 00:10:05.399 "w_mbytes_per_sec": 0 00:10:05.399 }, 00:10:05.399 "claimed": true, 00:10:05.399 "claim_type": "exclusive_write", 00:10:05.399 "zoned": false, 00:10:05.399 "supported_io_types": { 00:10:05.399 "read": true, 00:10:05.399 "write": true, 00:10:05.399 "unmap": true, 00:10:05.399 "flush": true, 00:10:05.399 "reset": true, 00:10:05.399 "nvme_admin": false, 00:10:05.399 "nvme_io": false, 00:10:05.399 "nvme_io_md": false, 00:10:05.399 "write_zeroes": true, 00:10:05.399 "zcopy": true, 00:10:05.399 "get_zone_info": false, 00:10:05.399 "zone_management": false, 00:10:05.399 "zone_append": false, 00:10:05.399 "compare": false, 00:10:05.399 "compare_and_write": false, 00:10:05.399 "abort": true, 00:10:05.399 "seek_hole": false, 00:10:05.399 "seek_data": false, 00:10:05.399 "copy": true, 00:10:05.399 "nvme_iov_md": false 00:10:05.399 }, 00:10:05.399 "memory_domains": [ 00:10:05.399 { 00:10:05.399 "dma_device_id": "system", 00:10:05.399 "dma_device_type": 1 00:10:05.399 }, 00:10:05.399 { 00:10:05.399 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:05.399 "dma_device_type": 2 00:10:05.399 } 00:10:05.399 ], 00:10:05.399 "driver_specific": {} 00:10:05.399 } 00:10:05.399 ] 00:10:05.399 04:59:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:10:05.399 04:59:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:05.399 04:59:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:05.399 04:59:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:05.399 04:59:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:05.399 04:59:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:05.399 04:59:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:05.399 04:59:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:05.399 04:59:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:05.399 04:59:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:05.399 04:59:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:05.399 04:59:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:05.399 04:59:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:05.399 04:59:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:05.399 04:59:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.399 04:59:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.659 04:59:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.659 04:59:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:05.659 "name": "Existed_Raid", 
00:10:05.659 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:05.659 "strip_size_kb": 0, 00:10:05.659 "state": "configuring", 00:10:05.659 "raid_level": "raid1", 00:10:05.659 "superblock": false, 00:10:05.659 "num_base_bdevs": 4, 00:10:05.659 "num_base_bdevs_discovered": 1, 00:10:05.659 "num_base_bdevs_operational": 4, 00:10:05.659 "base_bdevs_list": [ 00:10:05.659 { 00:10:05.659 "name": "BaseBdev1", 00:10:05.659 "uuid": "6a43f45c-bfde-45b5-99b3-7e9dec7904fe", 00:10:05.659 "is_configured": true, 00:10:05.659 "data_offset": 0, 00:10:05.659 "data_size": 65536 00:10:05.659 }, 00:10:05.659 { 00:10:05.659 "name": "BaseBdev2", 00:10:05.659 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:05.659 "is_configured": false, 00:10:05.659 "data_offset": 0, 00:10:05.659 "data_size": 0 00:10:05.659 }, 00:10:05.659 { 00:10:05.659 "name": "BaseBdev3", 00:10:05.659 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:05.659 "is_configured": false, 00:10:05.659 "data_offset": 0, 00:10:05.659 "data_size": 0 00:10:05.659 }, 00:10:05.659 { 00:10:05.659 "name": "BaseBdev4", 00:10:05.659 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:05.659 "is_configured": false, 00:10:05.659 "data_offset": 0, 00:10:05.659 "data_size": 0 00:10:05.659 } 00:10:05.659 ] 00:10:05.659 }' 00:10:05.659 04:59:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:05.659 04:59:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.919 04:59:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:05.919 04:59:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.919 04:59:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.919 [2024-12-14 04:59:16.683326] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:05.919 [2024-12-14 04:59:16.683405] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:10:05.919 04:59:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.919 04:59:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:05.919 04:59:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.919 04:59:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.919 [2024-12-14 04:59:16.691370] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:05.919 [2024-12-14 04:59:16.693159] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:05.919 [2024-12-14 04:59:16.693212] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:05.919 [2024-12-14 04:59:16.693225] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:05.919 [2024-12-14 04:59:16.693233] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:05.919 [2024-12-14 04:59:16.693239] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:05.919 [2024-12-14 04:59:16.693247] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:05.919 04:59:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.919 04:59:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:05.919 04:59:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:05.919 04:59:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:05.919 
04:59:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:05.919 04:59:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:05.919 04:59:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:05.919 04:59:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:05.919 04:59:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:05.919 04:59:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:05.919 04:59:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:05.919 04:59:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:05.919 04:59:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:05.919 04:59:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:05.919 04:59:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:05.919 04:59:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.919 04:59:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.919 04:59:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.919 04:59:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:05.919 "name": "Existed_Raid", 00:10:05.919 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:05.919 "strip_size_kb": 0, 00:10:05.919 "state": "configuring", 00:10:05.919 "raid_level": "raid1", 00:10:05.919 "superblock": false, 00:10:05.919 "num_base_bdevs": 4, 00:10:05.919 "num_base_bdevs_discovered": 1, 
00:10:05.919 "num_base_bdevs_operational": 4, 00:10:05.919 "base_bdevs_list": [ 00:10:05.919 { 00:10:05.919 "name": "BaseBdev1", 00:10:05.920 "uuid": "6a43f45c-bfde-45b5-99b3-7e9dec7904fe", 00:10:05.920 "is_configured": true, 00:10:05.920 "data_offset": 0, 00:10:05.920 "data_size": 65536 00:10:05.920 }, 00:10:05.920 { 00:10:05.920 "name": "BaseBdev2", 00:10:05.920 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:05.920 "is_configured": false, 00:10:05.920 "data_offset": 0, 00:10:05.920 "data_size": 0 00:10:05.920 }, 00:10:05.920 { 00:10:05.920 "name": "BaseBdev3", 00:10:05.920 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:05.920 "is_configured": false, 00:10:05.920 "data_offset": 0, 00:10:05.920 "data_size": 0 00:10:05.920 }, 00:10:05.920 { 00:10:05.920 "name": "BaseBdev4", 00:10:05.920 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:05.920 "is_configured": false, 00:10:05.920 "data_offset": 0, 00:10:05.920 "data_size": 0 00:10:05.920 } 00:10:05.920 ] 00:10:05.920 }' 00:10:05.920 04:59:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:05.920 04:59:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.491 04:59:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:06.491 04:59:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.491 04:59:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.491 [2024-12-14 04:59:17.149810] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:06.491 BaseBdev2 00:10:06.491 04:59:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.491 04:59:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:06.491 04:59:17 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:10:06.491 04:59:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:06.491 04:59:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:06.491 04:59:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:06.491 04:59:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:06.492 04:59:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:06.492 04:59:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.492 04:59:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.492 04:59:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.492 04:59:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:06.492 04:59:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.492 04:59:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.492 [ 00:10:06.492 { 00:10:06.492 "name": "BaseBdev2", 00:10:06.492 "aliases": [ 00:10:06.492 "93c9fc7a-88a1-463c-9a3f-5b15a9c08d4b" 00:10:06.492 ], 00:10:06.492 "product_name": "Malloc disk", 00:10:06.492 "block_size": 512, 00:10:06.492 "num_blocks": 65536, 00:10:06.492 "uuid": "93c9fc7a-88a1-463c-9a3f-5b15a9c08d4b", 00:10:06.492 "assigned_rate_limits": { 00:10:06.492 "rw_ios_per_sec": 0, 00:10:06.492 "rw_mbytes_per_sec": 0, 00:10:06.492 "r_mbytes_per_sec": 0, 00:10:06.492 "w_mbytes_per_sec": 0 00:10:06.492 }, 00:10:06.492 "claimed": true, 00:10:06.492 "claim_type": "exclusive_write", 00:10:06.492 "zoned": false, 00:10:06.492 "supported_io_types": { 00:10:06.492 "read": true, 
00:10:06.492 "write": true, 00:10:06.492 "unmap": true, 00:10:06.492 "flush": true, 00:10:06.492 "reset": true, 00:10:06.492 "nvme_admin": false, 00:10:06.492 "nvme_io": false, 00:10:06.492 "nvme_io_md": false, 00:10:06.492 "write_zeroes": true, 00:10:06.492 "zcopy": true, 00:10:06.492 "get_zone_info": false, 00:10:06.492 "zone_management": false, 00:10:06.492 "zone_append": false, 00:10:06.492 "compare": false, 00:10:06.492 "compare_and_write": false, 00:10:06.492 "abort": true, 00:10:06.492 "seek_hole": false, 00:10:06.492 "seek_data": false, 00:10:06.492 "copy": true, 00:10:06.492 "nvme_iov_md": false 00:10:06.492 }, 00:10:06.492 "memory_domains": [ 00:10:06.492 { 00:10:06.492 "dma_device_id": "system", 00:10:06.492 "dma_device_type": 1 00:10:06.492 }, 00:10:06.492 { 00:10:06.492 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:06.492 "dma_device_type": 2 00:10:06.492 } 00:10:06.492 ], 00:10:06.492 "driver_specific": {} 00:10:06.492 } 00:10:06.492 ] 00:10:06.492 04:59:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.492 04:59:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:06.492 04:59:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:06.492 04:59:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:06.492 04:59:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:06.492 04:59:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:06.492 04:59:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:06.492 04:59:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:06.492 04:59:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:10:06.492 04:59:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:06.492 04:59:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:06.492 04:59:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:06.492 04:59:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:06.492 04:59:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:06.492 04:59:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:06.492 04:59:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:06.492 04:59:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.492 04:59:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.492 04:59:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.492 04:59:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:06.492 "name": "Existed_Raid", 00:10:06.492 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:06.492 "strip_size_kb": 0, 00:10:06.492 "state": "configuring", 00:10:06.492 "raid_level": "raid1", 00:10:06.492 "superblock": false, 00:10:06.492 "num_base_bdevs": 4, 00:10:06.492 "num_base_bdevs_discovered": 2, 00:10:06.492 "num_base_bdevs_operational": 4, 00:10:06.492 "base_bdevs_list": [ 00:10:06.492 { 00:10:06.492 "name": "BaseBdev1", 00:10:06.492 "uuid": "6a43f45c-bfde-45b5-99b3-7e9dec7904fe", 00:10:06.492 "is_configured": true, 00:10:06.492 "data_offset": 0, 00:10:06.492 "data_size": 65536 00:10:06.492 }, 00:10:06.492 { 00:10:06.492 "name": "BaseBdev2", 00:10:06.492 "uuid": "93c9fc7a-88a1-463c-9a3f-5b15a9c08d4b", 00:10:06.492 "is_configured": true, 
00:10:06.492 "data_offset": 0, 00:10:06.492 "data_size": 65536 00:10:06.492 }, 00:10:06.492 { 00:10:06.492 "name": "BaseBdev3", 00:10:06.492 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:06.492 "is_configured": false, 00:10:06.492 "data_offset": 0, 00:10:06.492 "data_size": 0 00:10:06.492 }, 00:10:06.492 { 00:10:06.492 "name": "BaseBdev4", 00:10:06.492 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:06.492 "is_configured": false, 00:10:06.492 "data_offset": 0, 00:10:06.492 "data_size": 0 00:10:06.492 } 00:10:06.492 ] 00:10:06.492 }' 00:10:06.492 04:59:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:06.492 04:59:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.760 04:59:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:06.760 04:59:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.760 04:59:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.760 [2024-12-14 04:59:17.623936] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:06.760 BaseBdev3 00:10:06.760 04:59:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.760 04:59:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:06.760 04:59:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:10:06.760 04:59:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:06.760 04:59:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:06.760 04:59:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:06.760 04:59:17 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:06.760 04:59:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:06.760 04:59:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.760 04:59:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.760 04:59:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.028 04:59:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:07.028 04:59:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.028 04:59:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.028 [ 00:10:07.028 { 00:10:07.028 "name": "BaseBdev3", 00:10:07.028 "aliases": [ 00:10:07.028 "52577780-7786-45e9-a271-ad7cb41d228e" 00:10:07.028 ], 00:10:07.028 "product_name": "Malloc disk", 00:10:07.028 "block_size": 512, 00:10:07.028 "num_blocks": 65536, 00:10:07.028 "uuid": "52577780-7786-45e9-a271-ad7cb41d228e", 00:10:07.028 "assigned_rate_limits": { 00:10:07.028 "rw_ios_per_sec": 0, 00:10:07.028 "rw_mbytes_per_sec": 0, 00:10:07.028 "r_mbytes_per_sec": 0, 00:10:07.028 "w_mbytes_per_sec": 0 00:10:07.028 }, 00:10:07.028 "claimed": true, 00:10:07.028 "claim_type": "exclusive_write", 00:10:07.028 "zoned": false, 00:10:07.028 "supported_io_types": { 00:10:07.028 "read": true, 00:10:07.028 "write": true, 00:10:07.028 "unmap": true, 00:10:07.028 "flush": true, 00:10:07.028 "reset": true, 00:10:07.028 "nvme_admin": false, 00:10:07.028 "nvme_io": false, 00:10:07.028 "nvme_io_md": false, 00:10:07.028 "write_zeroes": true, 00:10:07.028 "zcopy": true, 00:10:07.028 "get_zone_info": false, 00:10:07.028 "zone_management": false, 00:10:07.028 "zone_append": false, 00:10:07.028 "compare": false, 00:10:07.028 "compare_and_write": false, 
00:10:07.028 "abort": true, 00:10:07.028 "seek_hole": false, 00:10:07.028 "seek_data": false, 00:10:07.028 "copy": true, 00:10:07.028 "nvme_iov_md": false 00:10:07.028 }, 00:10:07.028 "memory_domains": [ 00:10:07.028 { 00:10:07.028 "dma_device_id": "system", 00:10:07.028 "dma_device_type": 1 00:10:07.028 }, 00:10:07.028 { 00:10:07.028 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:07.028 "dma_device_type": 2 00:10:07.028 } 00:10:07.028 ], 00:10:07.028 "driver_specific": {} 00:10:07.028 } 00:10:07.028 ] 00:10:07.028 04:59:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.028 04:59:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:07.028 04:59:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:07.028 04:59:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:07.028 04:59:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:07.028 04:59:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:07.028 04:59:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:07.028 04:59:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:07.028 04:59:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:07.028 04:59:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:07.028 04:59:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:07.028 04:59:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:07.028 04:59:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:10:07.028 04:59:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:07.028 04:59:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:07.028 04:59:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.028 04:59:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.028 04:59:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:07.028 04:59:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.028 04:59:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:07.028 "name": "Existed_Raid", 00:10:07.028 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:07.028 "strip_size_kb": 0, 00:10:07.028 "state": "configuring", 00:10:07.028 "raid_level": "raid1", 00:10:07.028 "superblock": false, 00:10:07.028 "num_base_bdevs": 4, 00:10:07.028 "num_base_bdevs_discovered": 3, 00:10:07.028 "num_base_bdevs_operational": 4, 00:10:07.028 "base_bdevs_list": [ 00:10:07.028 { 00:10:07.028 "name": "BaseBdev1", 00:10:07.028 "uuid": "6a43f45c-bfde-45b5-99b3-7e9dec7904fe", 00:10:07.028 "is_configured": true, 00:10:07.028 "data_offset": 0, 00:10:07.028 "data_size": 65536 00:10:07.028 }, 00:10:07.028 { 00:10:07.028 "name": "BaseBdev2", 00:10:07.028 "uuid": "93c9fc7a-88a1-463c-9a3f-5b15a9c08d4b", 00:10:07.028 "is_configured": true, 00:10:07.028 "data_offset": 0, 00:10:07.028 "data_size": 65536 00:10:07.028 }, 00:10:07.028 { 00:10:07.028 "name": "BaseBdev3", 00:10:07.028 "uuid": "52577780-7786-45e9-a271-ad7cb41d228e", 00:10:07.028 "is_configured": true, 00:10:07.028 "data_offset": 0, 00:10:07.028 "data_size": 65536 00:10:07.028 }, 00:10:07.028 { 00:10:07.028 "name": "BaseBdev4", 00:10:07.028 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:07.028 "is_configured": false, 
00:10:07.028 "data_offset": 0, 00:10:07.028 "data_size": 0 00:10:07.028 } 00:10:07.028 ] 00:10:07.028 }' 00:10:07.028 04:59:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:07.028 04:59:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.289 04:59:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:07.289 04:59:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.289 04:59:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.289 [2024-12-14 04:59:18.090230] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:07.289 [2024-12-14 04:59:18.090298] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:10:07.289 [2024-12-14 04:59:18.090307] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:10:07.289 [2024-12-14 04:59:18.090652] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:10:07.289 [2024-12-14 04:59:18.090831] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:10:07.289 [2024-12-14 04:59:18.090856] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:10:07.289 [2024-12-14 04:59:18.091064] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:07.289 BaseBdev4 00:10:07.289 04:59:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.289 04:59:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:10:07.289 04:59:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:10:07.289 04:59:18 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:07.289 04:59:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:07.289 04:59:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:07.289 04:59:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:07.289 04:59:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:07.289 04:59:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.289 04:59:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.289 04:59:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.289 04:59:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:07.289 04:59:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.289 04:59:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.289 [ 00:10:07.289 { 00:10:07.289 "name": "BaseBdev4", 00:10:07.289 "aliases": [ 00:10:07.289 "5d1a1d53-1bce-45b1-82ca-f545da4b9967" 00:10:07.289 ], 00:10:07.289 "product_name": "Malloc disk", 00:10:07.289 "block_size": 512, 00:10:07.289 "num_blocks": 65536, 00:10:07.289 "uuid": "5d1a1d53-1bce-45b1-82ca-f545da4b9967", 00:10:07.289 "assigned_rate_limits": { 00:10:07.289 "rw_ios_per_sec": 0, 00:10:07.289 "rw_mbytes_per_sec": 0, 00:10:07.289 "r_mbytes_per_sec": 0, 00:10:07.289 "w_mbytes_per_sec": 0 00:10:07.289 }, 00:10:07.289 "claimed": true, 00:10:07.289 "claim_type": "exclusive_write", 00:10:07.289 "zoned": false, 00:10:07.289 "supported_io_types": { 00:10:07.289 "read": true, 00:10:07.289 "write": true, 00:10:07.289 "unmap": true, 00:10:07.289 "flush": true, 00:10:07.289 "reset": true, 00:10:07.289 
"nvme_admin": false, 00:10:07.289 "nvme_io": false, 00:10:07.289 "nvme_io_md": false, 00:10:07.289 "write_zeroes": true, 00:10:07.289 "zcopy": true, 00:10:07.289 "get_zone_info": false, 00:10:07.289 "zone_management": false, 00:10:07.289 "zone_append": false, 00:10:07.289 "compare": false, 00:10:07.289 "compare_and_write": false, 00:10:07.289 "abort": true, 00:10:07.289 "seek_hole": false, 00:10:07.289 "seek_data": false, 00:10:07.289 "copy": true, 00:10:07.289 "nvme_iov_md": false 00:10:07.289 }, 00:10:07.289 "memory_domains": [ 00:10:07.289 { 00:10:07.289 "dma_device_id": "system", 00:10:07.289 "dma_device_type": 1 00:10:07.289 }, 00:10:07.289 { 00:10:07.289 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:07.289 "dma_device_type": 2 00:10:07.289 } 00:10:07.289 ], 00:10:07.289 "driver_specific": {} 00:10:07.289 } 00:10:07.289 ] 00:10:07.289 04:59:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.289 04:59:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:07.289 04:59:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:07.289 04:59:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:07.289 04:59:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:10:07.289 04:59:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:07.289 04:59:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:07.289 04:59:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:07.289 04:59:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:07.289 04:59:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:07.289 04:59:18 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:07.289 04:59:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:07.289 04:59:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:07.289 04:59:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:07.289 04:59:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:07.289 04:59:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:07.289 04:59:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.289 04:59:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.289 04:59:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.549 04:59:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:07.549 "name": "Existed_Raid", 00:10:07.549 "uuid": "e1e09603-2326-4c24-a333-1cbefde60f82", 00:10:07.549 "strip_size_kb": 0, 00:10:07.549 "state": "online", 00:10:07.549 "raid_level": "raid1", 00:10:07.549 "superblock": false, 00:10:07.549 "num_base_bdevs": 4, 00:10:07.549 "num_base_bdevs_discovered": 4, 00:10:07.549 "num_base_bdevs_operational": 4, 00:10:07.549 "base_bdevs_list": [ 00:10:07.549 { 00:10:07.549 "name": "BaseBdev1", 00:10:07.549 "uuid": "6a43f45c-bfde-45b5-99b3-7e9dec7904fe", 00:10:07.549 "is_configured": true, 00:10:07.549 "data_offset": 0, 00:10:07.549 "data_size": 65536 00:10:07.549 }, 00:10:07.549 { 00:10:07.549 "name": "BaseBdev2", 00:10:07.549 "uuid": "93c9fc7a-88a1-463c-9a3f-5b15a9c08d4b", 00:10:07.549 "is_configured": true, 00:10:07.549 "data_offset": 0, 00:10:07.549 "data_size": 65536 00:10:07.549 }, 00:10:07.549 { 00:10:07.549 "name": "BaseBdev3", 00:10:07.549 "uuid": 
"52577780-7786-45e9-a271-ad7cb41d228e", 00:10:07.549 "is_configured": true, 00:10:07.549 "data_offset": 0, 00:10:07.549 "data_size": 65536 00:10:07.549 }, 00:10:07.549 { 00:10:07.549 "name": "BaseBdev4", 00:10:07.549 "uuid": "5d1a1d53-1bce-45b1-82ca-f545da4b9967", 00:10:07.549 "is_configured": true, 00:10:07.549 "data_offset": 0, 00:10:07.549 "data_size": 65536 00:10:07.549 } 00:10:07.549 ] 00:10:07.549 }' 00:10:07.549 04:59:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:07.549 04:59:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.809 04:59:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:07.809 04:59:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:07.809 04:59:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:07.809 04:59:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:07.809 04:59:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:07.809 04:59:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:07.809 04:59:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:07.809 04:59:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.809 04:59:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.809 04:59:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:07.809 [2024-12-14 04:59:18.577720] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:07.809 04:59:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.809 04:59:18 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:07.809 "name": "Existed_Raid", 00:10:07.809 "aliases": [ 00:10:07.809 "e1e09603-2326-4c24-a333-1cbefde60f82" 00:10:07.809 ], 00:10:07.809 "product_name": "Raid Volume", 00:10:07.809 "block_size": 512, 00:10:07.809 "num_blocks": 65536, 00:10:07.809 "uuid": "e1e09603-2326-4c24-a333-1cbefde60f82", 00:10:07.809 "assigned_rate_limits": { 00:10:07.809 "rw_ios_per_sec": 0, 00:10:07.809 "rw_mbytes_per_sec": 0, 00:10:07.809 "r_mbytes_per_sec": 0, 00:10:07.809 "w_mbytes_per_sec": 0 00:10:07.809 }, 00:10:07.809 "claimed": false, 00:10:07.809 "zoned": false, 00:10:07.809 "supported_io_types": { 00:10:07.809 "read": true, 00:10:07.809 "write": true, 00:10:07.809 "unmap": false, 00:10:07.809 "flush": false, 00:10:07.809 "reset": true, 00:10:07.809 "nvme_admin": false, 00:10:07.809 "nvme_io": false, 00:10:07.809 "nvme_io_md": false, 00:10:07.809 "write_zeroes": true, 00:10:07.809 "zcopy": false, 00:10:07.809 "get_zone_info": false, 00:10:07.809 "zone_management": false, 00:10:07.809 "zone_append": false, 00:10:07.809 "compare": false, 00:10:07.809 "compare_and_write": false, 00:10:07.809 "abort": false, 00:10:07.809 "seek_hole": false, 00:10:07.809 "seek_data": false, 00:10:07.809 "copy": false, 00:10:07.809 "nvme_iov_md": false 00:10:07.809 }, 00:10:07.809 "memory_domains": [ 00:10:07.809 { 00:10:07.809 "dma_device_id": "system", 00:10:07.809 "dma_device_type": 1 00:10:07.809 }, 00:10:07.809 { 00:10:07.809 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:07.809 "dma_device_type": 2 00:10:07.809 }, 00:10:07.809 { 00:10:07.809 "dma_device_id": "system", 00:10:07.809 "dma_device_type": 1 00:10:07.809 }, 00:10:07.809 { 00:10:07.809 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:07.809 "dma_device_type": 2 00:10:07.809 }, 00:10:07.809 { 00:10:07.809 "dma_device_id": "system", 00:10:07.809 "dma_device_type": 1 00:10:07.809 }, 00:10:07.809 { 00:10:07.809 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:10:07.809 "dma_device_type": 2 00:10:07.809 }, 00:10:07.809 { 00:10:07.809 "dma_device_id": "system", 00:10:07.809 "dma_device_type": 1 00:10:07.809 }, 00:10:07.809 { 00:10:07.809 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:07.809 "dma_device_type": 2 00:10:07.809 } 00:10:07.809 ], 00:10:07.809 "driver_specific": { 00:10:07.809 "raid": { 00:10:07.809 "uuid": "e1e09603-2326-4c24-a333-1cbefde60f82", 00:10:07.809 "strip_size_kb": 0, 00:10:07.809 "state": "online", 00:10:07.809 "raid_level": "raid1", 00:10:07.809 "superblock": false, 00:10:07.809 "num_base_bdevs": 4, 00:10:07.809 "num_base_bdevs_discovered": 4, 00:10:07.809 "num_base_bdevs_operational": 4, 00:10:07.809 "base_bdevs_list": [ 00:10:07.809 { 00:10:07.809 "name": "BaseBdev1", 00:10:07.809 "uuid": "6a43f45c-bfde-45b5-99b3-7e9dec7904fe", 00:10:07.809 "is_configured": true, 00:10:07.809 "data_offset": 0, 00:10:07.809 "data_size": 65536 00:10:07.809 }, 00:10:07.809 { 00:10:07.809 "name": "BaseBdev2", 00:10:07.809 "uuid": "93c9fc7a-88a1-463c-9a3f-5b15a9c08d4b", 00:10:07.809 "is_configured": true, 00:10:07.809 "data_offset": 0, 00:10:07.809 "data_size": 65536 00:10:07.809 }, 00:10:07.809 { 00:10:07.809 "name": "BaseBdev3", 00:10:07.809 "uuid": "52577780-7786-45e9-a271-ad7cb41d228e", 00:10:07.809 "is_configured": true, 00:10:07.809 "data_offset": 0, 00:10:07.809 "data_size": 65536 00:10:07.809 }, 00:10:07.809 { 00:10:07.809 "name": "BaseBdev4", 00:10:07.809 "uuid": "5d1a1d53-1bce-45b1-82ca-f545da4b9967", 00:10:07.809 "is_configured": true, 00:10:07.809 "data_offset": 0, 00:10:07.809 "data_size": 65536 00:10:07.809 } 00:10:07.809 ] 00:10:07.809 } 00:10:07.809 } 00:10:07.809 }' 00:10:07.809 04:59:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:07.809 04:59:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:07.809 BaseBdev2 00:10:07.809 BaseBdev3 
00:10:07.809 BaseBdev4' 00:10:07.809 04:59:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:08.070 04:59:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:08.070 04:59:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:08.070 04:59:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:08.070 04:59:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.070 04:59:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.070 04:59:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:08.070 04:59:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.070 04:59:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:08.070 04:59:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:08.070 04:59:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:08.070 04:59:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:08.070 04:59:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:08.070 04:59:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.070 04:59:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.070 04:59:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.070 04:59:18 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:08.070 04:59:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:08.070 04:59:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:08.070 04:59:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:08.070 04:59:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:08.070 04:59:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.070 04:59:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.070 04:59:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.070 04:59:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:08.070 04:59:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:08.070 04:59:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:08.070 04:59:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:08.070 04:59:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:08.070 04:59:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.070 04:59:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.070 04:59:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.070 04:59:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:08.070 04:59:18 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:08.070 04:59:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:08.070 04:59:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.070 04:59:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.070 [2024-12-14 04:59:18.880996] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:08.070 04:59:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.070 04:59:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:08.070 04:59:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:10:08.070 04:59:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:08.070 04:59:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:08.070 04:59:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:10:08.070 04:59:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:10:08.070 04:59:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:08.070 04:59:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:08.070 04:59:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:08.070 04:59:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:08.070 04:59:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:08.070 04:59:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:08.070 
04:59:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:08.070 04:59:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:08.070 04:59:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:08.070 04:59:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:08.070 04:59:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.070 04:59:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.070 04:59:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:08.070 04:59:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.070 04:59:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:08.070 "name": "Existed_Raid", 00:10:08.070 "uuid": "e1e09603-2326-4c24-a333-1cbefde60f82", 00:10:08.070 "strip_size_kb": 0, 00:10:08.070 "state": "online", 00:10:08.070 "raid_level": "raid1", 00:10:08.070 "superblock": false, 00:10:08.070 "num_base_bdevs": 4, 00:10:08.070 "num_base_bdevs_discovered": 3, 00:10:08.070 "num_base_bdevs_operational": 3, 00:10:08.070 "base_bdevs_list": [ 00:10:08.070 { 00:10:08.070 "name": null, 00:10:08.070 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:08.070 "is_configured": false, 00:10:08.070 "data_offset": 0, 00:10:08.070 "data_size": 65536 00:10:08.070 }, 00:10:08.070 { 00:10:08.070 "name": "BaseBdev2", 00:10:08.070 "uuid": "93c9fc7a-88a1-463c-9a3f-5b15a9c08d4b", 00:10:08.070 "is_configured": true, 00:10:08.070 "data_offset": 0, 00:10:08.070 "data_size": 65536 00:10:08.070 }, 00:10:08.070 { 00:10:08.070 "name": "BaseBdev3", 00:10:08.070 "uuid": "52577780-7786-45e9-a271-ad7cb41d228e", 00:10:08.070 "is_configured": true, 00:10:08.070 "data_offset": 0, 
00:10:08.070 "data_size": 65536 00:10:08.070 }, 00:10:08.070 { 00:10:08.070 "name": "BaseBdev4", 00:10:08.070 "uuid": "5d1a1d53-1bce-45b1-82ca-f545da4b9967", 00:10:08.070 "is_configured": true, 00:10:08.070 "data_offset": 0, 00:10:08.070 "data_size": 65536 00:10:08.070 } 00:10:08.070 ] 00:10:08.070 }' 00:10:08.070 04:59:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:08.070 04:59:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.640 04:59:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:08.640 04:59:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:08.640 04:59:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:08.640 04:59:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.640 04:59:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.640 04:59:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:08.640 04:59:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.640 04:59:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:08.640 04:59:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:08.640 04:59:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:08.640 04:59:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.640 04:59:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.640 [2024-12-14 04:59:19.399361] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:08.640 04:59:19 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.640 04:59:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:08.640 04:59:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:08.640 04:59:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:08.640 04:59:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:08.640 04:59:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.640 04:59:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.640 04:59:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.640 04:59:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:08.640 04:59:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:08.640 04:59:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:08.640 04:59:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.640 04:59:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.640 [2024-12-14 04:59:19.454325] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:08.640 04:59:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.640 04:59:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:08.640 04:59:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:08.641 04:59:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:08.641 04:59:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:10:08.641 04:59:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.641 04:59:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.641 04:59:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.641 04:59:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:08.641 04:59:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:08.641 04:59:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:10:08.641 04:59:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.641 04:59:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.641 [2024-12-14 04:59:19.513063] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:10:08.641 [2024-12-14 04:59:19.513180] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:08.901 [2024-12-14 04:59:19.524702] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:08.901 [2024-12-14 04:59:19.524755] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:08.901 [2024-12-14 04:59:19.524773] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:10:08.901 04:59:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.901 04:59:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:08.901 04:59:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:08.901 04:59:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:10:08.901 04:59:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:08.901 04:59:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.901 04:59:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.901 04:59:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.901 04:59:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:08.901 04:59:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:08.901 04:59:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:10:08.901 04:59:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:08.901 04:59:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:08.901 04:59:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:08.901 04:59:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.901 04:59:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.901 BaseBdev2 00:10:08.901 04:59:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.901 04:59:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:08.901 04:59:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:10:08.901 04:59:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:08.901 04:59:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:08.901 04:59:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 
-- # [[ -z '' ]] 00:10:08.901 04:59:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:08.901 04:59:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:08.901 04:59:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.901 04:59:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.901 04:59:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.901 04:59:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:08.901 04:59:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.901 04:59:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.901 [ 00:10:08.901 { 00:10:08.901 "name": "BaseBdev2", 00:10:08.901 "aliases": [ 00:10:08.901 "7b666ee3-b5bd-4db9-997c-0c0804b4b936" 00:10:08.901 ], 00:10:08.901 "product_name": "Malloc disk", 00:10:08.901 "block_size": 512, 00:10:08.901 "num_blocks": 65536, 00:10:08.901 "uuid": "7b666ee3-b5bd-4db9-997c-0c0804b4b936", 00:10:08.901 "assigned_rate_limits": { 00:10:08.901 "rw_ios_per_sec": 0, 00:10:08.901 "rw_mbytes_per_sec": 0, 00:10:08.901 "r_mbytes_per_sec": 0, 00:10:08.901 "w_mbytes_per_sec": 0 00:10:08.901 }, 00:10:08.901 "claimed": false, 00:10:08.901 "zoned": false, 00:10:08.901 "supported_io_types": { 00:10:08.901 "read": true, 00:10:08.901 "write": true, 00:10:08.901 "unmap": true, 00:10:08.901 "flush": true, 00:10:08.901 "reset": true, 00:10:08.901 "nvme_admin": false, 00:10:08.901 "nvme_io": false, 00:10:08.901 "nvme_io_md": false, 00:10:08.901 "write_zeroes": true, 00:10:08.901 "zcopy": true, 00:10:08.901 "get_zone_info": false, 00:10:08.901 "zone_management": false, 00:10:08.901 "zone_append": false, 00:10:08.901 "compare": false, 
00:10:08.901 "compare_and_write": false, 00:10:08.901 "abort": true, 00:10:08.901 "seek_hole": false, 00:10:08.901 "seek_data": false, 00:10:08.901 "copy": true, 00:10:08.901 "nvme_iov_md": false 00:10:08.901 }, 00:10:08.901 "memory_domains": [ 00:10:08.901 { 00:10:08.901 "dma_device_id": "system", 00:10:08.901 "dma_device_type": 1 00:10:08.901 }, 00:10:08.901 { 00:10:08.901 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:08.901 "dma_device_type": 2 00:10:08.901 } 00:10:08.901 ], 00:10:08.901 "driver_specific": {} 00:10:08.901 } 00:10:08.901 ] 00:10:08.901 04:59:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.901 04:59:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:08.901 04:59:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:08.901 04:59:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:08.901 04:59:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:08.901 04:59:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.901 04:59:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.901 BaseBdev3 00:10:08.901 04:59:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.901 04:59:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:08.901 04:59:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:10:08.901 04:59:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:08.901 04:59:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:08.901 04:59:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' 
]] 00:10:08.901 04:59:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:08.901 04:59:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:08.901 04:59:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.901 04:59:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.901 04:59:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.902 04:59:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:08.902 04:59:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.902 04:59:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.902 [ 00:10:08.902 { 00:10:08.902 "name": "BaseBdev3", 00:10:08.902 "aliases": [ 00:10:08.902 "6003b319-a880-40f8-bb08-36af741c8359" 00:10:08.902 ], 00:10:08.902 "product_name": "Malloc disk", 00:10:08.902 "block_size": 512, 00:10:08.902 "num_blocks": 65536, 00:10:08.902 "uuid": "6003b319-a880-40f8-bb08-36af741c8359", 00:10:08.902 "assigned_rate_limits": { 00:10:08.902 "rw_ios_per_sec": 0, 00:10:08.902 "rw_mbytes_per_sec": 0, 00:10:08.902 "r_mbytes_per_sec": 0, 00:10:08.902 "w_mbytes_per_sec": 0 00:10:08.902 }, 00:10:08.902 "claimed": false, 00:10:08.902 "zoned": false, 00:10:08.902 "supported_io_types": { 00:10:08.902 "read": true, 00:10:08.902 "write": true, 00:10:08.902 "unmap": true, 00:10:08.902 "flush": true, 00:10:08.902 "reset": true, 00:10:08.902 "nvme_admin": false, 00:10:08.902 "nvme_io": false, 00:10:08.902 "nvme_io_md": false, 00:10:08.902 "write_zeroes": true, 00:10:08.902 "zcopy": true, 00:10:08.902 "get_zone_info": false, 00:10:08.902 "zone_management": false, 00:10:08.902 "zone_append": false, 00:10:08.902 "compare": false, 00:10:08.902 
"compare_and_write": false, 00:10:08.902 "abort": true, 00:10:08.902 "seek_hole": false, 00:10:08.902 "seek_data": false, 00:10:08.902 "copy": true, 00:10:08.902 "nvme_iov_md": false 00:10:08.902 }, 00:10:08.902 "memory_domains": [ 00:10:08.902 { 00:10:08.902 "dma_device_id": "system", 00:10:08.902 "dma_device_type": 1 00:10:08.902 }, 00:10:08.902 { 00:10:08.902 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:08.902 "dma_device_type": 2 00:10:08.902 } 00:10:08.902 ], 00:10:08.902 "driver_specific": {} 00:10:08.902 } 00:10:08.902 ] 00:10:08.902 04:59:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.902 04:59:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:08.902 04:59:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:08.902 04:59:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:08.902 04:59:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:08.902 04:59:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.902 04:59:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.902 BaseBdev4 00:10:08.902 04:59:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.902 04:59:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:10:08.902 04:59:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:10:08.902 04:59:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:08.902 04:59:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:08.902 04:59:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 
00:10:08.902 04:59:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:08.902 04:59:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:08.902 04:59:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.902 04:59:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.902 04:59:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.902 04:59:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:08.902 04:59:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.902 04:59:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.902 [ 00:10:08.902 { 00:10:08.902 "name": "BaseBdev4", 00:10:08.902 "aliases": [ 00:10:08.902 "d0695186-e5d9-4a15-b2b7-68ba6716f808" 00:10:08.902 ], 00:10:08.902 "product_name": "Malloc disk", 00:10:08.902 "block_size": 512, 00:10:08.902 "num_blocks": 65536, 00:10:08.902 "uuid": "d0695186-e5d9-4a15-b2b7-68ba6716f808", 00:10:08.902 "assigned_rate_limits": { 00:10:08.902 "rw_ios_per_sec": 0, 00:10:08.902 "rw_mbytes_per_sec": 0, 00:10:08.902 "r_mbytes_per_sec": 0, 00:10:08.902 "w_mbytes_per_sec": 0 00:10:08.902 }, 00:10:08.902 "claimed": false, 00:10:08.902 "zoned": false, 00:10:08.902 "supported_io_types": { 00:10:08.902 "read": true, 00:10:08.902 "write": true, 00:10:08.902 "unmap": true, 00:10:08.902 "flush": true, 00:10:08.902 "reset": true, 00:10:08.902 "nvme_admin": false, 00:10:08.902 "nvme_io": false, 00:10:08.902 "nvme_io_md": false, 00:10:08.902 "write_zeroes": true, 00:10:08.902 "zcopy": true, 00:10:08.902 "get_zone_info": false, 00:10:08.902 "zone_management": false, 00:10:08.902 "zone_append": false, 00:10:08.902 "compare": false, 00:10:08.902 
"compare_and_write": false, 00:10:08.902 "abort": true, 00:10:08.902 "seek_hole": false, 00:10:08.902 "seek_data": false, 00:10:08.902 "copy": true, 00:10:08.902 "nvme_iov_md": false 00:10:08.902 }, 00:10:08.902 "memory_domains": [ 00:10:08.902 { 00:10:08.902 "dma_device_id": "system", 00:10:08.902 "dma_device_type": 1 00:10:08.902 }, 00:10:08.902 { 00:10:08.902 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:08.902 "dma_device_type": 2 00:10:08.902 } 00:10:08.902 ], 00:10:08.902 "driver_specific": {} 00:10:08.902 } 00:10:08.902 ] 00:10:08.902 04:59:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.902 04:59:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:08.902 04:59:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:08.902 04:59:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:08.902 04:59:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:08.902 04:59:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.902 04:59:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.902 [2024-12-14 04:59:19.728264] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:08.902 [2024-12-14 04:59:19.728313] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:08.902 [2024-12-14 04:59:19.728331] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:08.902 [2024-12-14 04:59:19.730122] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:08.902 [2024-12-14 04:59:19.730184] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 
00:10:08.902 04:59:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.902 04:59:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:08.902 04:59:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:08.902 04:59:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:08.902 04:59:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:08.902 04:59:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:08.902 04:59:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:08.902 04:59:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:08.902 04:59:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:08.902 04:59:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:08.902 04:59:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:08.902 04:59:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:08.902 04:59:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.902 04:59:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:08.902 04:59:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.902 04:59:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.162 04:59:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:09.162 "name": "Existed_Raid", 00:10:09.162 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:10:09.162 "strip_size_kb": 0, 00:10:09.162 "state": "configuring", 00:10:09.162 "raid_level": "raid1", 00:10:09.162 "superblock": false, 00:10:09.162 "num_base_bdevs": 4, 00:10:09.162 "num_base_bdevs_discovered": 3, 00:10:09.162 "num_base_bdevs_operational": 4, 00:10:09.162 "base_bdevs_list": [ 00:10:09.162 { 00:10:09.162 "name": "BaseBdev1", 00:10:09.162 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:09.162 "is_configured": false, 00:10:09.162 "data_offset": 0, 00:10:09.162 "data_size": 0 00:10:09.162 }, 00:10:09.162 { 00:10:09.162 "name": "BaseBdev2", 00:10:09.162 "uuid": "7b666ee3-b5bd-4db9-997c-0c0804b4b936", 00:10:09.162 "is_configured": true, 00:10:09.162 "data_offset": 0, 00:10:09.162 "data_size": 65536 00:10:09.162 }, 00:10:09.162 { 00:10:09.162 "name": "BaseBdev3", 00:10:09.162 "uuid": "6003b319-a880-40f8-bb08-36af741c8359", 00:10:09.162 "is_configured": true, 00:10:09.162 "data_offset": 0, 00:10:09.162 "data_size": 65536 00:10:09.162 }, 00:10:09.162 { 00:10:09.162 "name": "BaseBdev4", 00:10:09.162 "uuid": "d0695186-e5d9-4a15-b2b7-68ba6716f808", 00:10:09.162 "is_configured": true, 00:10:09.162 "data_offset": 0, 00:10:09.162 "data_size": 65536 00:10:09.162 } 00:10:09.162 ] 00:10:09.162 }' 00:10:09.162 04:59:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:09.162 04:59:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.422 04:59:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:09.422 04:59:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.422 04:59:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.422 [2024-12-14 04:59:20.195442] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:09.422 04:59:20 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.422 04:59:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:09.422 04:59:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:09.422 04:59:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:09.422 04:59:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:09.422 04:59:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:09.422 04:59:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:09.422 04:59:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:09.422 04:59:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:09.422 04:59:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:09.422 04:59:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:09.422 04:59:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:09.422 04:59:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:09.422 04:59:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.422 04:59:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.422 04:59:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.422 04:59:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:09.422 "name": "Existed_Raid", 00:10:09.422 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:09.422 
"strip_size_kb": 0, 00:10:09.422 "state": "configuring", 00:10:09.422 "raid_level": "raid1", 00:10:09.422 "superblock": false, 00:10:09.422 "num_base_bdevs": 4, 00:10:09.422 "num_base_bdevs_discovered": 2, 00:10:09.422 "num_base_bdevs_operational": 4, 00:10:09.422 "base_bdevs_list": [ 00:10:09.422 { 00:10:09.422 "name": "BaseBdev1", 00:10:09.422 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:09.422 "is_configured": false, 00:10:09.422 "data_offset": 0, 00:10:09.422 "data_size": 0 00:10:09.422 }, 00:10:09.422 { 00:10:09.422 "name": null, 00:10:09.422 "uuid": "7b666ee3-b5bd-4db9-997c-0c0804b4b936", 00:10:09.422 "is_configured": false, 00:10:09.422 "data_offset": 0, 00:10:09.422 "data_size": 65536 00:10:09.422 }, 00:10:09.422 { 00:10:09.422 "name": "BaseBdev3", 00:10:09.422 "uuid": "6003b319-a880-40f8-bb08-36af741c8359", 00:10:09.422 "is_configured": true, 00:10:09.422 "data_offset": 0, 00:10:09.422 "data_size": 65536 00:10:09.422 }, 00:10:09.422 { 00:10:09.422 "name": "BaseBdev4", 00:10:09.422 "uuid": "d0695186-e5d9-4a15-b2b7-68ba6716f808", 00:10:09.422 "is_configured": true, 00:10:09.422 "data_offset": 0, 00:10:09.422 "data_size": 65536 00:10:09.422 } 00:10:09.422 ] 00:10:09.422 }' 00:10:09.422 04:59:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:09.422 04:59:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.992 04:59:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:09.992 04:59:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.992 04:59:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.992 04:59:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:09.992 04:59:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.992 04:59:20 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:09.992 04:59:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:09.992 04:59:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.992 04:59:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.992 [2024-12-14 04:59:20.685557] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:09.992 BaseBdev1 00:10:09.992 04:59:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.992 04:59:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:09.992 04:59:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:10:09.992 04:59:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:09.992 04:59:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:09.992 04:59:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:09.992 04:59:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:09.992 04:59:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:09.992 04:59:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.992 04:59:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.992 04:59:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.992 04:59:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:09.992 04:59:20 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.992 04:59:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.992 [ 00:10:09.992 { 00:10:09.992 "name": "BaseBdev1", 00:10:09.992 "aliases": [ 00:10:09.992 "9c5957f7-9021-402b-b274-a54ae6d2735a" 00:10:09.992 ], 00:10:09.992 "product_name": "Malloc disk", 00:10:09.992 "block_size": 512, 00:10:09.992 "num_blocks": 65536, 00:10:09.992 "uuid": "9c5957f7-9021-402b-b274-a54ae6d2735a", 00:10:09.992 "assigned_rate_limits": { 00:10:09.992 "rw_ios_per_sec": 0, 00:10:09.992 "rw_mbytes_per_sec": 0, 00:10:09.992 "r_mbytes_per_sec": 0, 00:10:09.992 "w_mbytes_per_sec": 0 00:10:09.992 }, 00:10:09.992 "claimed": true, 00:10:09.992 "claim_type": "exclusive_write", 00:10:09.992 "zoned": false, 00:10:09.992 "supported_io_types": { 00:10:09.992 "read": true, 00:10:09.992 "write": true, 00:10:09.992 "unmap": true, 00:10:09.992 "flush": true, 00:10:09.992 "reset": true, 00:10:09.992 "nvme_admin": false, 00:10:09.992 "nvme_io": false, 00:10:09.992 "nvme_io_md": false, 00:10:09.992 "write_zeroes": true, 00:10:09.992 "zcopy": true, 00:10:09.992 "get_zone_info": false, 00:10:09.992 "zone_management": false, 00:10:09.992 "zone_append": false, 00:10:09.992 "compare": false, 00:10:09.992 "compare_and_write": false, 00:10:09.992 "abort": true, 00:10:09.992 "seek_hole": false, 00:10:09.992 "seek_data": false, 00:10:09.992 "copy": true, 00:10:09.992 "nvme_iov_md": false 00:10:09.992 }, 00:10:09.992 "memory_domains": [ 00:10:09.992 { 00:10:09.992 "dma_device_id": "system", 00:10:09.992 "dma_device_type": 1 00:10:09.992 }, 00:10:09.992 { 00:10:09.992 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:09.992 "dma_device_type": 2 00:10:09.992 } 00:10:09.992 ], 00:10:09.992 "driver_specific": {} 00:10:09.992 } 00:10:09.992 ] 00:10:09.992 04:59:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.992 04:59:20 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@907 -- # return 0 00:10:09.992 04:59:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:09.992 04:59:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:09.992 04:59:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:09.992 04:59:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:09.992 04:59:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:09.992 04:59:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:09.992 04:59:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:09.992 04:59:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:09.992 04:59:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:09.992 04:59:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:09.992 04:59:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:09.992 04:59:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.992 04:59:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:09.992 04:59:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.992 04:59:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.992 04:59:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:09.992 "name": "Existed_Raid", 00:10:09.992 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:09.992 
"strip_size_kb": 0, 00:10:09.992 "state": "configuring", 00:10:09.992 "raid_level": "raid1", 00:10:09.992 "superblock": false, 00:10:09.992 "num_base_bdevs": 4, 00:10:09.992 "num_base_bdevs_discovered": 3, 00:10:09.992 "num_base_bdevs_operational": 4, 00:10:09.992 "base_bdevs_list": [ 00:10:09.992 { 00:10:09.992 "name": "BaseBdev1", 00:10:09.992 "uuid": "9c5957f7-9021-402b-b274-a54ae6d2735a", 00:10:09.992 "is_configured": true, 00:10:09.992 "data_offset": 0, 00:10:09.992 "data_size": 65536 00:10:09.992 }, 00:10:09.992 { 00:10:09.992 "name": null, 00:10:09.992 "uuid": "7b666ee3-b5bd-4db9-997c-0c0804b4b936", 00:10:09.992 "is_configured": false, 00:10:09.993 "data_offset": 0, 00:10:09.993 "data_size": 65536 00:10:09.993 }, 00:10:09.993 { 00:10:09.993 "name": "BaseBdev3", 00:10:09.993 "uuid": "6003b319-a880-40f8-bb08-36af741c8359", 00:10:09.993 "is_configured": true, 00:10:09.993 "data_offset": 0, 00:10:09.993 "data_size": 65536 00:10:09.993 }, 00:10:09.993 { 00:10:09.993 "name": "BaseBdev4", 00:10:09.993 "uuid": "d0695186-e5d9-4a15-b2b7-68ba6716f808", 00:10:09.993 "is_configured": true, 00:10:09.993 "data_offset": 0, 00:10:09.993 "data_size": 65536 00:10:09.993 } 00:10:09.993 ] 00:10:09.993 }' 00:10:09.993 04:59:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:09.993 04:59:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.562 04:59:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:10.562 04:59:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:10.562 04:59:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.562 04:59:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.562 04:59:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.562 
04:59:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:10.562 04:59:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:10.562 04:59:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.562 04:59:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.562 [2024-12-14 04:59:21.188724] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:10.562 04:59:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.562 04:59:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:10.562 04:59:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:10.562 04:59:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:10.562 04:59:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:10.562 04:59:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:10.562 04:59:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:10.562 04:59:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:10.562 04:59:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:10.562 04:59:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:10.562 04:59:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:10.563 04:59:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:10.563 04:59:21 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:10.563 04:59:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.563 04:59:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.563 04:59:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.563 04:59:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:10.563 "name": "Existed_Raid", 00:10:10.563 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:10.563 "strip_size_kb": 0, 00:10:10.563 "state": "configuring", 00:10:10.563 "raid_level": "raid1", 00:10:10.563 "superblock": false, 00:10:10.563 "num_base_bdevs": 4, 00:10:10.563 "num_base_bdevs_discovered": 2, 00:10:10.563 "num_base_bdevs_operational": 4, 00:10:10.563 "base_bdevs_list": [ 00:10:10.563 { 00:10:10.563 "name": "BaseBdev1", 00:10:10.563 "uuid": "9c5957f7-9021-402b-b274-a54ae6d2735a", 00:10:10.563 "is_configured": true, 00:10:10.563 "data_offset": 0, 00:10:10.563 "data_size": 65536 00:10:10.563 }, 00:10:10.563 { 00:10:10.563 "name": null, 00:10:10.563 "uuid": "7b666ee3-b5bd-4db9-997c-0c0804b4b936", 00:10:10.563 "is_configured": false, 00:10:10.563 "data_offset": 0, 00:10:10.563 "data_size": 65536 00:10:10.563 }, 00:10:10.563 { 00:10:10.563 "name": null, 00:10:10.563 "uuid": "6003b319-a880-40f8-bb08-36af741c8359", 00:10:10.563 "is_configured": false, 00:10:10.563 "data_offset": 0, 00:10:10.563 "data_size": 65536 00:10:10.563 }, 00:10:10.563 { 00:10:10.563 "name": "BaseBdev4", 00:10:10.563 "uuid": "d0695186-e5d9-4a15-b2b7-68ba6716f808", 00:10:10.563 "is_configured": true, 00:10:10.563 "data_offset": 0, 00:10:10.563 "data_size": 65536 00:10:10.563 } 00:10:10.563 ] 00:10:10.563 }' 00:10:10.563 04:59:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:10.563 04:59:21 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:10.823 04:59:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:10.823 04:59:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:10.823 04:59:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.823 04:59:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.823 04:59:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.823 04:59:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:10.823 04:59:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:10.823 04:59:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.823 04:59:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.823 [2024-12-14 04:59:21.636000] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:10.823 04:59:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.823 04:59:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:10.823 04:59:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:10.823 04:59:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:10.823 04:59:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:10.823 04:59:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:10.823 04:59:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:10:10.823 04:59:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:10.823 04:59:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:10.823 04:59:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:10.823 04:59:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:10.823 04:59:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:10.823 04:59:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:10.823 04:59:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.823 04:59:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.823 04:59:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.823 04:59:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:10.823 "name": "Existed_Raid", 00:10:10.823 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:10.823 "strip_size_kb": 0, 00:10:10.823 "state": "configuring", 00:10:10.823 "raid_level": "raid1", 00:10:10.823 "superblock": false, 00:10:10.823 "num_base_bdevs": 4, 00:10:10.823 "num_base_bdevs_discovered": 3, 00:10:10.823 "num_base_bdevs_operational": 4, 00:10:10.823 "base_bdevs_list": [ 00:10:10.823 { 00:10:10.823 "name": "BaseBdev1", 00:10:10.823 "uuid": "9c5957f7-9021-402b-b274-a54ae6d2735a", 00:10:10.823 "is_configured": true, 00:10:10.823 "data_offset": 0, 00:10:10.823 "data_size": 65536 00:10:10.823 }, 00:10:10.823 { 00:10:10.823 "name": null, 00:10:10.823 "uuid": "7b666ee3-b5bd-4db9-997c-0c0804b4b936", 00:10:10.823 "is_configured": false, 00:10:10.823 "data_offset": 0, 00:10:10.823 "data_size": 65536 00:10:10.823 }, 00:10:10.823 { 
00:10:10.823 "name": "BaseBdev3", 00:10:10.823 "uuid": "6003b319-a880-40f8-bb08-36af741c8359", 00:10:10.823 "is_configured": true, 00:10:10.823 "data_offset": 0, 00:10:10.823 "data_size": 65536 00:10:10.823 }, 00:10:10.823 { 00:10:10.823 "name": "BaseBdev4", 00:10:10.823 "uuid": "d0695186-e5d9-4a15-b2b7-68ba6716f808", 00:10:10.823 "is_configured": true, 00:10:10.823 "data_offset": 0, 00:10:10.823 "data_size": 65536 00:10:10.823 } 00:10:10.823 ] 00:10:10.823 }' 00:10:10.823 04:59:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:10.823 04:59:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.393 04:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:11.393 04:59:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.393 04:59:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.393 04:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:11.393 04:59:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.393 04:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:11.393 04:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:11.393 04:59:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.393 04:59:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.393 [2024-12-14 04:59:22.143215] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:11.393 04:59:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.393 04:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # 
verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:11.393 04:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:11.393 04:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:11.393 04:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:11.393 04:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:11.393 04:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:11.393 04:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:11.393 04:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:11.393 04:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:11.393 04:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:11.393 04:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:11.393 04:59:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.393 04:59:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.393 04:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:11.393 04:59:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.393 04:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:11.393 "name": "Existed_Raid", 00:10:11.393 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:11.393 "strip_size_kb": 0, 00:10:11.393 "state": "configuring", 00:10:11.393 "raid_level": "raid1", 00:10:11.393 "superblock": false, 00:10:11.393 
"num_base_bdevs": 4, 00:10:11.393 "num_base_bdevs_discovered": 2, 00:10:11.393 "num_base_bdevs_operational": 4, 00:10:11.393 "base_bdevs_list": [ 00:10:11.393 { 00:10:11.393 "name": null, 00:10:11.393 "uuid": "9c5957f7-9021-402b-b274-a54ae6d2735a", 00:10:11.393 "is_configured": false, 00:10:11.393 "data_offset": 0, 00:10:11.393 "data_size": 65536 00:10:11.393 }, 00:10:11.393 { 00:10:11.393 "name": null, 00:10:11.393 "uuid": "7b666ee3-b5bd-4db9-997c-0c0804b4b936", 00:10:11.393 "is_configured": false, 00:10:11.393 "data_offset": 0, 00:10:11.393 "data_size": 65536 00:10:11.393 }, 00:10:11.393 { 00:10:11.393 "name": "BaseBdev3", 00:10:11.393 "uuid": "6003b319-a880-40f8-bb08-36af741c8359", 00:10:11.393 "is_configured": true, 00:10:11.393 "data_offset": 0, 00:10:11.393 "data_size": 65536 00:10:11.393 }, 00:10:11.393 { 00:10:11.393 "name": "BaseBdev4", 00:10:11.393 "uuid": "d0695186-e5d9-4a15-b2b7-68ba6716f808", 00:10:11.393 "is_configured": true, 00:10:11.393 "data_offset": 0, 00:10:11.393 "data_size": 65536 00:10:11.393 } 00:10:11.393 ] 00:10:11.393 }' 00:10:11.393 04:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:11.393 04:59:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.963 04:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:11.963 04:59:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.963 04:59:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.963 04:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:11.963 04:59:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.963 04:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:11.963 04:59:22 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:11.963 04:59:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.963 04:59:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.963 [2024-12-14 04:59:22.620859] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:11.963 04:59:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.963 04:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:11.963 04:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:11.963 04:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:11.963 04:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:11.963 04:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:11.963 04:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:11.963 04:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:11.963 04:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:11.963 04:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:11.963 04:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:11.963 04:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:11.963 04:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:11.963 04:59:22 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.963 04:59:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.963 04:59:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.963 04:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:11.963 "name": "Existed_Raid", 00:10:11.963 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:11.963 "strip_size_kb": 0, 00:10:11.963 "state": "configuring", 00:10:11.963 "raid_level": "raid1", 00:10:11.963 "superblock": false, 00:10:11.963 "num_base_bdevs": 4, 00:10:11.963 "num_base_bdevs_discovered": 3, 00:10:11.963 "num_base_bdevs_operational": 4, 00:10:11.963 "base_bdevs_list": [ 00:10:11.963 { 00:10:11.963 "name": null, 00:10:11.963 "uuid": "9c5957f7-9021-402b-b274-a54ae6d2735a", 00:10:11.963 "is_configured": false, 00:10:11.963 "data_offset": 0, 00:10:11.963 "data_size": 65536 00:10:11.963 }, 00:10:11.963 { 00:10:11.963 "name": "BaseBdev2", 00:10:11.963 "uuid": "7b666ee3-b5bd-4db9-997c-0c0804b4b936", 00:10:11.963 "is_configured": true, 00:10:11.963 "data_offset": 0, 00:10:11.963 "data_size": 65536 00:10:11.963 }, 00:10:11.963 { 00:10:11.963 "name": "BaseBdev3", 00:10:11.963 "uuid": "6003b319-a880-40f8-bb08-36af741c8359", 00:10:11.963 "is_configured": true, 00:10:11.963 "data_offset": 0, 00:10:11.963 "data_size": 65536 00:10:11.963 }, 00:10:11.963 { 00:10:11.963 "name": "BaseBdev4", 00:10:11.963 "uuid": "d0695186-e5d9-4a15-b2b7-68ba6716f808", 00:10:11.963 "is_configured": true, 00:10:11.963 "data_offset": 0, 00:10:11.963 "data_size": 65536 00:10:11.963 } 00:10:11.963 ] 00:10:11.963 }' 00:10:11.963 04:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:11.963 04:59:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.223 04:59:23 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:12.223 04:59:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:12.223 04:59:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.223 04:59:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.223 04:59:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.483 04:59:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:12.483 04:59:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:12.483 04:59:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.483 04:59:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:12.483 04:59:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.483 04:59:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.483 04:59:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 9c5957f7-9021-402b-b274-a54ae6d2735a 00:10:12.483 04:59:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.483 04:59:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.483 [2024-12-14 04:59:23.174815] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:12.483 [2024-12-14 04:59:23.174867] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:10:12.483 [2024-12-14 04:59:23.174879] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:10:12.483 [2024-12-14 04:59:23.175133] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:10:12.483 [2024-12-14 04:59:23.175326] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:10:12.483 [2024-12-14 04:59:23.175349] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:10:12.483 [2024-12-14 04:59:23.175540] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:12.483 NewBaseBdev 00:10:12.483 04:59:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.483 04:59:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:12.483 04:59:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:10:12.483 04:59:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:12.483 04:59:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:12.483 04:59:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:12.483 04:59:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:12.483 04:59:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:12.483 04:59:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.483 04:59:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.483 04:59:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.483 04:59:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:12.483 04:59:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.483 04:59:23 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.483 [ 00:10:12.483 { 00:10:12.483 "name": "NewBaseBdev", 00:10:12.483 "aliases": [ 00:10:12.483 "9c5957f7-9021-402b-b274-a54ae6d2735a" 00:10:12.483 ], 00:10:12.483 "product_name": "Malloc disk", 00:10:12.483 "block_size": 512, 00:10:12.483 "num_blocks": 65536, 00:10:12.483 "uuid": "9c5957f7-9021-402b-b274-a54ae6d2735a", 00:10:12.483 "assigned_rate_limits": { 00:10:12.483 "rw_ios_per_sec": 0, 00:10:12.483 "rw_mbytes_per_sec": 0, 00:10:12.483 "r_mbytes_per_sec": 0, 00:10:12.483 "w_mbytes_per_sec": 0 00:10:12.483 }, 00:10:12.483 "claimed": true, 00:10:12.483 "claim_type": "exclusive_write", 00:10:12.483 "zoned": false, 00:10:12.483 "supported_io_types": { 00:10:12.483 "read": true, 00:10:12.483 "write": true, 00:10:12.483 "unmap": true, 00:10:12.483 "flush": true, 00:10:12.483 "reset": true, 00:10:12.483 "nvme_admin": false, 00:10:12.483 "nvme_io": false, 00:10:12.483 "nvme_io_md": false, 00:10:12.483 "write_zeroes": true, 00:10:12.483 "zcopy": true, 00:10:12.483 "get_zone_info": false, 00:10:12.483 "zone_management": false, 00:10:12.483 "zone_append": false, 00:10:12.483 "compare": false, 00:10:12.483 "compare_and_write": false, 00:10:12.483 "abort": true, 00:10:12.483 "seek_hole": false, 00:10:12.484 "seek_data": false, 00:10:12.484 "copy": true, 00:10:12.484 "nvme_iov_md": false 00:10:12.484 }, 00:10:12.484 "memory_domains": [ 00:10:12.484 { 00:10:12.484 "dma_device_id": "system", 00:10:12.484 "dma_device_type": 1 00:10:12.484 }, 00:10:12.484 { 00:10:12.484 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:12.484 "dma_device_type": 2 00:10:12.484 } 00:10:12.484 ], 00:10:12.484 "driver_specific": {} 00:10:12.484 } 00:10:12.484 ] 00:10:12.484 04:59:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.484 04:59:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:12.484 04:59:23 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:10:12.484 04:59:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:12.484 04:59:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:12.484 04:59:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:12.484 04:59:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:12.484 04:59:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:12.484 04:59:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:12.484 04:59:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:12.484 04:59:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:12.484 04:59:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:12.484 04:59:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:12.484 04:59:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:12.484 04:59:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.484 04:59:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.484 04:59:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.484 04:59:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:12.484 "name": "Existed_Raid", 00:10:12.484 "uuid": "50ccdf11-5aa0-46fe-bbc5-ad6e74730abf", 00:10:12.484 "strip_size_kb": 0, 00:10:12.484 "state": "online", 00:10:12.484 "raid_level": "raid1", 
00:10:12.484 "superblock": false, 00:10:12.484 "num_base_bdevs": 4, 00:10:12.484 "num_base_bdevs_discovered": 4, 00:10:12.484 "num_base_bdevs_operational": 4, 00:10:12.484 "base_bdevs_list": [ 00:10:12.484 { 00:10:12.484 "name": "NewBaseBdev", 00:10:12.484 "uuid": "9c5957f7-9021-402b-b274-a54ae6d2735a", 00:10:12.484 "is_configured": true, 00:10:12.484 "data_offset": 0, 00:10:12.484 "data_size": 65536 00:10:12.484 }, 00:10:12.484 { 00:10:12.484 "name": "BaseBdev2", 00:10:12.484 "uuid": "7b666ee3-b5bd-4db9-997c-0c0804b4b936", 00:10:12.484 "is_configured": true, 00:10:12.484 "data_offset": 0, 00:10:12.484 "data_size": 65536 00:10:12.484 }, 00:10:12.484 { 00:10:12.484 "name": "BaseBdev3", 00:10:12.484 "uuid": "6003b319-a880-40f8-bb08-36af741c8359", 00:10:12.484 "is_configured": true, 00:10:12.484 "data_offset": 0, 00:10:12.484 "data_size": 65536 00:10:12.484 }, 00:10:12.484 { 00:10:12.484 "name": "BaseBdev4", 00:10:12.484 "uuid": "d0695186-e5d9-4a15-b2b7-68ba6716f808", 00:10:12.484 "is_configured": true, 00:10:12.484 "data_offset": 0, 00:10:12.484 "data_size": 65536 00:10:12.484 } 00:10:12.484 ] 00:10:12.484 }' 00:10:12.484 04:59:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:12.484 04:59:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.743 04:59:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:12.743 04:59:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:12.743 04:59:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:12.743 04:59:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:12.743 04:59:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:12.743 04:59:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev 
cmp_base_bdev 00:10:12.743 04:59:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:12.743 04:59:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.743 04:59:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.743 04:59:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:12.743 [2024-12-14 04:59:23.614414] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:13.003 04:59:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.003 04:59:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:13.003 "name": "Existed_Raid", 00:10:13.003 "aliases": [ 00:10:13.003 "50ccdf11-5aa0-46fe-bbc5-ad6e74730abf" 00:10:13.003 ], 00:10:13.003 "product_name": "Raid Volume", 00:10:13.003 "block_size": 512, 00:10:13.003 "num_blocks": 65536, 00:10:13.003 "uuid": "50ccdf11-5aa0-46fe-bbc5-ad6e74730abf", 00:10:13.003 "assigned_rate_limits": { 00:10:13.003 "rw_ios_per_sec": 0, 00:10:13.003 "rw_mbytes_per_sec": 0, 00:10:13.003 "r_mbytes_per_sec": 0, 00:10:13.003 "w_mbytes_per_sec": 0 00:10:13.003 }, 00:10:13.003 "claimed": false, 00:10:13.003 "zoned": false, 00:10:13.003 "supported_io_types": { 00:10:13.003 "read": true, 00:10:13.003 "write": true, 00:10:13.003 "unmap": false, 00:10:13.003 "flush": false, 00:10:13.003 "reset": true, 00:10:13.003 "nvme_admin": false, 00:10:13.003 "nvme_io": false, 00:10:13.003 "nvme_io_md": false, 00:10:13.003 "write_zeroes": true, 00:10:13.003 "zcopy": false, 00:10:13.003 "get_zone_info": false, 00:10:13.003 "zone_management": false, 00:10:13.003 "zone_append": false, 00:10:13.003 "compare": false, 00:10:13.003 "compare_and_write": false, 00:10:13.003 "abort": false, 00:10:13.003 "seek_hole": false, 00:10:13.003 "seek_data": false, 00:10:13.003 "copy": false, 00:10:13.003 
"nvme_iov_md": false 00:10:13.003 }, 00:10:13.003 "memory_domains": [ 00:10:13.003 { 00:10:13.003 "dma_device_id": "system", 00:10:13.003 "dma_device_type": 1 00:10:13.003 }, 00:10:13.003 { 00:10:13.003 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:13.003 "dma_device_type": 2 00:10:13.003 }, 00:10:13.003 { 00:10:13.003 "dma_device_id": "system", 00:10:13.003 "dma_device_type": 1 00:10:13.003 }, 00:10:13.003 { 00:10:13.003 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:13.003 "dma_device_type": 2 00:10:13.003 }, 00:10:13.003 { 00:10:13.003 "dma_device_id": "system", 00:10:13.003 "dma_device_type": 1 00:10:13.003 }, 00:10:13.003 { 00:10:13.003 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:13.003 "dma_device_type": 2 00:10:13.003 }, 00:10:13.003 { 00:10:13.003 "dma_device_id": "system", 00:10:13.003 "dma_device_type": 1 00:10:13.003 }, 00:10:13.003 { 00:10:13.003 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:13.003 "dma_device_type": 2 00:10:13.003 } 00:10:13.003 ], 00:10:13.003 "driver_specific": { 00:10:13.003 "raid": { 00:10:13.003 "uuid": "50ccdf11-5aa0-46fe-bbc5-ad6e74730abf", 00:10:13.003 "strip_size_kb": 0, 00:10:13.003 "state": "online", 00:10:13.003 "raid_level": "raid1", 00:10:13.003 "superblock": false, 00:10:13.003 "num_base_bdevs": 4, 00:10:13.003 "num_base_bdevs_discovered": 4, 00:10:13.003 "num_base_bdevs_operational": 4, 00:10:13.003 "base_bdevs_list": [ 00:10:13.003 { 00:10:13.003 "name": "NewBaseBdev", 00:10:13.003 "uuid": "9c5957f7-9021-402b-b274-a54ae6d2735a", 00:10:13.004 "is_configured": true, 00:10:13.004 "data_offset": 0, 00:10:13.004 "data_size": 65536 00:10:13.004 }, 00:10:13.004 { 00:10:13.004 "name": "BaseBdev2", 00:10:13.004 "uuid": "7b666ee3-b5bd-4db9-997c-0c0804b4b936", 00:10:13.004 "is_configured": true, 00:10:13.004 "data_offset": 0, 00:10:13.004 "data_size": 65536 00:10:13.004 }, 00:10:13.004 { 00:10:13.004 "name": "BaseBdev3", 00:10:13.004 "uuid": "6003b319-a880-40f8-bb08-36af741c8359", 00:10:13.004 "is_configured": true, 
00:10:13.004 "data_offset": 0, 00:10:13.004 "data_size": 65536 00:10:13.004 }, 00:10:13.004 { 00:10:13.004 "name": "BaseBdev4", 00:10:13.004 "uuid": "d0695186-e5d9-4a15-b2b7-68ba6716f808", 00:10:13.004 "is_configured": true, 00:10:13.004 "data_offset": 0, 00:10:13.004 "data_size": 65536 00:10:13.004 } 00:10:13.004 ] 00:10:13.004 } 00:10:13.004 } 00:10:13.004 }' 00:10:13.004 04:59:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:13.004 04:59:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:13.004 BaseBdev2 00:10:13.004 BaseBdev3 00:10:13.004 BaseBdev4' 00:10:13.004 04:59:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:13.004 04:59:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:13.004 04:59:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:13.004 04:59:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:13.004 04:59:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.004 04:59:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.004 04:59:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:13.004 04:59:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.004 04:59:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:13.004 04:59:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:13.004 04:59:23 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:13.004 04:59:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:13.004 04:59:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.004 04:59:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:13.004 04:59:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.004 04:59:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.004 04:59:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:13.004 04:59:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:13.004 04:59:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:13.004 04:59:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:13.004 04:59:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.004 04:59:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.004 04:59:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:13.004 04:59:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.264 04:59:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:13.264 04:59:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:13.264 04:59:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:13.264 04:59:23 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:13.264 04:59:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:13.264 04:59:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.264 04:59:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.264 04:59:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.264 04:59:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:13.264 04:59:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:13.264 04:59:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:13.264 04:59:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.264 04:59:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.264 [2024-12-14 04:59:23.945519] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:13.264 [2024-12-14 04:59:23.945548] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:13.264 [2024-12-14 04:59:23.945622] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:13.264 [2024-12-14 04:59:23.945900] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:13.264 [2024-12-14 04:59:23.945950] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline 00:10:13.264 04:59:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.265 04:59:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 83986 
00:10:13.265 04:59:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 83986 ']' 00:10:13.265 04:59:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 83986 00:10:13.265 04:59:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:10:13.265 04:59:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:13.265 04:59:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83986 00:10:13.265 04:59:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:13.265 04:59:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:13.265 killing process with pid 83986 00:10:13.265 04:59:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83986' 00:10:13.265 04:59:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 83986 00:10:13.265 [2024-12-14 04:59:23.990075] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:13.265 04:59:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 83986 00:10:13.265 [2024-12-14 04:59:24.029175] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:13.525 04:59:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:10:13.525 00:10:13.525 real 0m9.419s 00:10:13.525 user 0m16.196s 00:10:13.525 sys 0m1.901s 00:10:13.525 04:59:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:13.525 04:59:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.525 ************************************ 00:10:13.525 END TEST raid_state_function_test 00:10:13.525 ************************************ 00:10:13.525 04:59:24 bdev_raid -- 
bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:10:13.525 04:59:24 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:13.525 04:59:24 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:13.525 04:59:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:13.525 ************************************ 00:10:13.525 START TEST raid_state_function_test_sb 00:10:13.525 ************************************ 00:10:13.525 04:59:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 4 true 00:10:13.525 04:59:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:10:13.525 04:59:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:10:13.525 04:59:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:10:13.525 04:59:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:13.525 04:59:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:13.525 04:59:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:13.525 04:59:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:13.525 04:59:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:13.525 04:59:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:13.525 04:59:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:13.525 04:59:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:13.525 04:59:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:13.525 04:59:24 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:13.525 04:59:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:13.525 04:59:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:13.525 04:59:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:10:13.525 04:59:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:13.525 04:59:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:13.525 04:59:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:13.525 04:59:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:13.525 04:59:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:13.525 04:59:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:13.525 04:59:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:13.525 04:59:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:13.525 04:59:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:10:13.525 04:59:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:10:13.525 04:59:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:10:13.525 04:59:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:10:13.525 04:59:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=84630 00:10:13.525 04:59:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:13.525 Process raid pid: 84630 00:10:13.525 04:59:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 84630' 00:10:13.525 04:59:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 84630 00:10:13.525 04:59:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 84630 ']' 00:10:13.525 04:59:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:13.525 04:59:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:13.525 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:13.525 04:59:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:13.525 04:59:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:13.525 04:59:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.785 [2024-12-14 04:59:24.437168] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:10:13.785 [2024-12-14 04:59:24.437770] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:13.785 [2024-12-14 04:59:24.598895] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:13.785 [2024-12-14 04:59:24.644216] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:14.045 [2024-12-14 04:59:24.686180] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:14.045 [2024-12-14 04:59:24.686222] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:14.614 04:59:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:14.614 04:59:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:10:14.614 04:59:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:14.614 04:59:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.614 04:59:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.614 [2024-12-14 04:59:25.267848] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:14.614 [2024-12-14 04:59:25.267904] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:14.614 [2024-12-14 04:59:25.267917] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:14.614 [2024-12-14 04:59:25.267926] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:14.614 [2024-12-14 04:59:25.267936] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:10:14.614 [2024-12-14 04:59:25.267946] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:14.614 [2024-12-14 04:59:25.267952] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:14.614 [2024-12-14 04:59:25.267960] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:14.614 04:59:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.614 04:59:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:14.614 04:59:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:14.614 04:59:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:14.614 04:59:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:14.614 04:59:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:14.614 04:59:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:14.614 04:59:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:14.614 04:59:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:14.614 04:59:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:14.614 04:59:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:14.614 04:59:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:14.614 04:59:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:14.614 04:59:25 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.614 04:59:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.614 04:59:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.614 04:59:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:14.614 "name": "Existed_Raid", 00:10:14.614 "uuid": "0a1f20b0-7e95-4c5e-ad27-3d5be2ca7ecd", 00:10:14.614 "strip_size_kb": 0, 00:10:14.614 "state": "configuring", 00:10:14.614 "raid_level": "raid1", 00:10:14.614 "superblock": true, 00:10:14.614 "num_base_bdevs": 4, 00:10:14.614 "num_base_bdevs_discovered": 0, 00:10:14.614 "num_base_bdevs_operational": 4, 00:10:14.614 "base_bdevs_list": [ 00:10:14.614 { 00:10:14.614 "name": "BaseBdev1", 00:10:14.614 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:14.614 "is_configured": false, 00:10:14.614 "data_offset": 0, 00:10:14.614 "data_size": 0 00:10:14.614 }, 00:10:14.614 { 00:10:14.614 "name": "BaseBdev2", 00:10:14.614 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:14.614 "is_configured": false, 00:10:14.614 "data_offset": 0, 00:10:14.614 "data_size": 0 00:10:14.614 }, 00:10:14.614 { 00:10:14.614 "name": "BaseBdev3", 00:10:14.614 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:14.614 "is_configured": false, 00:10:14.614 "data_offset": 0, 00:10:14.614 "data_size": 0 00:10:14.614 }, 00:10:14.614 { 00:10:14.614 "name": "BaseBdev4", 00:10:14.614 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:14.614 "is_configured": false, 00:10:14.614 "data_offset": 0, 00:10:14.614 "data_size": 0 00:10:14.614 } 00:10:14.614 ] 00:10:14.614 }' 00:10:14.614 04:59:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:14.614 04:59:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.873 04:59:25 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:14.873 04:59:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.873 04:59:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.873 [2024-12-14 04:59:25.667104] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:14.873 [2024-12-14 04:59:25.667153] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:10:14.873 04:59:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.873 04:59:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:14.873 04:59:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.873 04:59:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.873 [2024-12-14 04:59:25.679117] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:14.873 [2024-12-14 04:59:25.679167] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:14.873 [2024-12-14 04:59:25.679192] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:14.873 [2024-12-14 04:59:25.679207] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:14.873 [2024-12-14 04:59:25.679213] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:14.873 [2024-12-14 04:59:25.679221] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:14.873 [2024-12-14 04:59:25.679227] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:10:14.873 [2024-12-14 04:59:25.679235] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:14.873 04:59:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.873 04:59:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:14.873 04:59:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.873 04:59:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.873 [2024-12-14 04:59:25.699934] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:14.873 BaseBdev1 00:10:14.873 04:59:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.873 04:59:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:14.873 04:59:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:10:14.873 04:59:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:14.873 04:59:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:14.873 04:59:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:14.873 04:59:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:14.873 04:59:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:14.873 04:59:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.873 04:59:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.873 04:59:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:10:14.873 04:59:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:14.873 04:59:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.873 04:59:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.873 [ 00:10:14.873 { 00:10:14.873 "name": "BaseBdev1", 00:10:14.873 "aliases": [ 00:10:14.873 "1aa5cec1-1da8-40e8-9455-f735583909ca" 00:10:14.873 ], 00:10:14.874 "product_name": "Malloc disk", 00:10:14.874 "block_size": 512, 00:10:14.874 "num_blocks": 65536, 00:10:14.874 "uuid": "1aa5cec1-1da8-40e8-9455-f735583909ca", 00:10:14.874 "assigned_rate_limits": { 00:10:14.874 "rw_ios_per_sec": 0, 00:10:14.874 "rw_mbytes_per_sec": 0, 00:10:14.874 "r_mbytes_per_sec": 0, 00:10:14.874 "w_mbytes_per_sec": 0 00:10:14.874 }, 00:10:14.874 "claimed": true, 00:10:14.874 "claim_type": "exclusive_write", 00:10:14.874 "zoned": false, 00:10:14.874 "supported_io_types": { 00:10:14.874 "read": true, 00:10:14.874 "write": true, 00:10:14.874 "unmap": true, 00:10:14.874 "flush": true, 00:10:14.874 "reset": true, 00:10:14.874 "nvme_admin": false, 00:10:14.874 "nvme_io": false, 00:10:14.874 "nvme_io_md": false, 00:10:14.874 "write_zeroes": true, 00:10:14.874 "zcopy": true, 00:10:14.874 "get_zone_info": false, 00:10:14.874 "zone_management": false, 00:10:14.874 "zone_append": false, 00:10:14.874 "compare": false, 00:10:14.874 "compare_and_write": false, 00:10:14.874 "abort": true, 00:10:14.874 "seek_hole": false, 00:10:14.874 "seek_data": false, 00:10:14.874 "copy": true, 00:10:14.874 "nvme_iov_md": false 00:10:14.874 }, 00:10:14.874 "memory_domains": [ 00:10:14.874 { 00:10:14.874 "dma_device_id": "system", 00:10:14.874 "dma_device_type": 1 00:10:14.874 }, 00:10:14.874 { 00:10:14.874 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:14.874 "dma_device_type": 2 00:10:14.874 } 00:10:14.874 ], 00:10:14.874 "driver_specific": {} 
00:10:14.874 } 00:10:14.874 ] 00:10:14.874 04:59:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.874 04:59:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:14.874 04:59:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:14.874 04:59:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:14.874 04:59:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:14.874 04:59:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:14.874 04:59:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:14.874 04:59:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:14.874 04:59:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:14.874 04:59:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:14.874 04:59:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:14.874 04:59:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:14.874 04:59:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:14.874 04:59:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:14.874 04:59:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.874 04:59:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.133 04:59:25 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.133 04:59:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:15.133 "name": "Existed_Raid", 00:10:15.133 "uuid": "5be6b6f1-6819-4bf8-ad5b-cb7b7742a96a", 00:10:15.133 "strip_size_kb": 0, 00:10:15.133 "state": "configuring", 00:10:15.133 "raid_level": "raid1", 00:10:15.133 "superblock": true, 00:10:15.133 "num_base_bdevs": 4, 00:10:15.133 "num_base_bdevs_discovered": 1, 00:10:15.133 "num_base_bdevs_operational": 4, 00:10:15.133 "base_bdevs_list": [ 00:10:15.133 { 00:10:15.133 "name": "BaseBdev1", 00:10:15.133 "uuid": "1aa5cec1-1da8-40e8-9455-f735583909ca", 00:10:15.133 "is_configured": true, 00:10:15.133 "data_offset": 2048, 00:10:15.133 "data_size": 63488 00:10:15.133 }, 00:10:15.133 { 00:10:15.133 "name": "BaseBdev2", 00:10:15.133 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:15.134 "is_configured": false, 00:10:15.134 "data_offset": 0, 00:10:15.134 "data_size": 0 00:10:15.134 }, 00:10:15.134 { 00:10:15.134 "name": "BaseBdev3", 00:10:15.134 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:15.134 "is_configured": false, 00:10:15.134 "data_offset": 0, 00:10:15.134 "data_size": 0 00:10:15.134 }, 00:10:15.134 { 00:10:15.134 "name": "BaseBdev4", 00:10:15.134 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:15.134 "is_configured": false, 00:10:15.134 "data_offset": 0, 00:10:15.134 "data_size": 0 00:10:15.134 } 00:10:15.134 ] 00:10:15.134 }' 00:10:15.134 04:59:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:15.134 04:59:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.394 04:59:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:15.394 04:59:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.394 04:59:26 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:10:15.394 [2024-12-14 04:59:26.131232] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:15.394 [2024-12-14 04:59:26.131280] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:10:15.394 04:59:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.394 04:59:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:15.394 04:59:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.394 04:59:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.394 [2024-12-14 04:59:26.143285] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:15.394 [2024-12-14 04:59:26.145227] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:15.394 [2024-12-14 04:59:26.145272] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:15.394 [2024-12-14 04:59:26.145281] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:15.394 [2024-12-14 04:59:26.145290] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:15.394 [2024-12-14 04:59:26.145296] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:15.394 [2024-12-14 04:59:26.145305] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:15.394 04:59:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.394 04:59:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:15.394 04:59:26 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:15.394 04:59:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:15.394 04:59:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:15.394 04:59:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:15.394 04:59:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:15.394 04:59:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:15.394 04:59:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:15.394 04:59:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:15.394 04:59:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:15.394 04:59:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:15.394 04:59:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:15.394 04:59:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:15.394 04:59:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:15.394 04:59:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.394 04:59:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.394 04:59:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.394 04:59:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:15.394 "name": 
"Existed_Raid", 00:10:15.394 "uuid": "bfa51895-7aaf-4ffa-be55-1ab2efb5d175", 00:10:15.394 "strip_size_kb": 0, 00:10:15.394 "state": "configuring", 00:10:15.394 "raid_level": "raid1", 00:10:15.394 "superblock": true, 00:10:15.394 "num_base_bdevs": 4, 00:10:15.394 "num_base_bdevs_discovered": 1, 00:10:15.394 "num_base_bdevs_operational": 4, 00:10:15.394 "base_bdevs_list": [ 00:10:15.394 { 00:10:15.394 "name": "BaseBdev1", 00:10:15.394 "uuid": "1aa5cec1-1da8-40e8-9455-f735583909ca", 00:10:15.394 "is_configured": true, 00:10:15.394 "data_offset": 2048, 00:10:15.394 "data_size": 63488 00:10:15.394 }, 00:10:15.394 { 00:10:15.394 "name": "BaseBdev2", 00:10:15.394 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:15.394 "is_configured": false, 00:10:15.394 "data_offset": 0, 00:10:15.394 "data_size": 0 00:10:15.394 }, 00:10:15.394 { 00:10:15.394 "name": "BaseBdev3", 00:10:15.394 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:15.394 "is_configured": false, 00:10:15.394 "data_offset": 0, 00:10:15.394 "data_size": 0 00:10:15.394 }, 00:10:15.394 { 00:10:15.394 "name": "BaseBdev4", 00:10:15.394 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:15.394 "is_configured": false, 00:10:15.394 "data_offset": 0, 00:10:15.394 "data_size": 0 00:10:15.394 } 00:10:15.394 ] 00:10:15.394 }' 00:10:15.394 04:59:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:15.394 04:59:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.963 04:59:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:15.963 04:59:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.963 04:59:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.963 [2024-12-14 04:59:26.634410] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:15.963 
BaseBdev2 00:10:15.963 04:59:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.963 04:59:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:15.963 04:59:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:10:15.963 04:59:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:15.963 04:59:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:15.963 04:59:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:15.963 04:59:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:15.964 04:59:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:15.964 04:59:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.964 04:59:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.964 04:59:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.964 04:59:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:15.964 04:59:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.964 04:59:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.964 [ 00:10:15.964 { 00:10:15.964 "name": "BaseBdev2", 00:10:15.964 "aliases": [ 00:10:15.964 "ce181cd3-c768-446d-ba54-19ae1f852b70" 00:10:15.964 ], 00:10:15.964 "product_name": "Malloc disk", 00:10:15.964 "block_size": 512, 00:10:15.964 "num_blocks": 65536, 00:10:15.964 "uuid": "ce181cd3-c768-446d-ba54-19ae1f852b70", 00:10:15.964 "assigned_rate_limits": { 
00:10:15.964 "rw_ios_per_sec": 0, 00:10:15.964 "rw_mbytes_per_sec": 0, 00:10:15.964 "r_mbytes_per_sec": 0, 00:10:15.964 "w_mbytes_per_sec": 0 00:10:15.964 }, 00:10:15.964 "claimed": true, 00:10:15.964 "claim_type": "exclusive_write", 00:10:15.964 "zoned": false, 00:10:15.964 "supported_io_types": { 00:10:15.964 "read": true, 00:10:15.964 "write": true, 00:10:15.964 "unmap": true, 00:10:15.964 "flush": true, 00:10:15.964 "reset": true, 00:10:15.964 "nvme_admin": false, 00:10:15.964 "nvme_io": false, 00:10:15.964 "nvme_io_md": false, 00:10:15.964 "write_zeroes": true, 00:10:15.964 "zcopy": true, 00:10:15.964 "get_zone_info": false, 00:10:15.964 "zone_management": false, 00:10:15.964 "zone_append": false, 00:10:15.964 "compare": false, 00:10:15.964 "compare_and_write": false, 00:10:15.964 "abort": true, 00:10:15.964 "seek_hole": false, 00:10:15.964 "seek_data": false, 00:10:15.964 "copy": true, 00:10:15.964 "nvme_iov_md": false 00:10:15.964 }, 00:10:15.964 "memory_domains": [ 00:10:15.964 { 00:10:15.964 "dma_device_id": "system", 00:10:15.964 "dma_device_type": 1 00:10:15.964 }, 00:10:15.964 { 00:10:15.964 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:15.964 "dma_device_type": 2 00:10:15.964 } 00:10:15.964 ], 00:10:15.964 "driver_specific": {} 00:10:15.964 } 00:10:15.964 ] 00:10:15.964 04:59:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.964 04:59:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:15.964 04:59:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:15.964 04:59:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:15.964 04:59:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:15.964 04:59:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 
00:10:15.964 04:59:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:15.964 04:59:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:15.964 04:59:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:15.964 04:59:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:15.964 04:59:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:15.964 04:59:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:15.964 04:59:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:15.964 04:59:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:15.964 04:59:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:15.964 04:59:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:15.964 04:59:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.964 04:59:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.964 04:59:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.964 04:59:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:15.964 "name": "Existed_Raid", 00:10:15.964 "uuid": "bfa51895-7aaf-4ffa-be55-1ab2efb5d175", 00:10:15.964 "strip_size_kb": 0, 00:10:15.964 "state": "configuring", 00:10:15.964 "raid_level": "raid1", 00:10:15.964 "superblock": true, 00:10:15.964 "num_base_bdevs": 4, 00:10:15.964 "num_base_bdevs_discovered": 2, 00:10:15.964 "num_base_bdevs_operational": 4, 00:10:15.964 
"base_bdevs_list": [ 00:10:15.964 { 00:10:15.964 "name": "BaseBdev1", 00:10:15.964 "uuid": "1aa5cec1-1da8-40e8-9455-f735583909ca", 00:10:15.964 "is_configured": true, 00:10:15.964 "data_offset": 2048, 00:10:15.964 "data_size": 63488 00:10:15.964 }, 00:10:15.964 { 00:10:15.964 "name": "BaseBdev2", 00:10:15.964 "uuid": "ce181cd3-c768-446d-ba54-19ae1f852b70", 00:10:15.964 "is_configured": true, 00:10:15.964 "data_offset": 2048, 00:10:15.964 "data_size": 63488 00:10:15.964 }, 00:10:15.964 { 00:10:15.964 "name": "BaseBdev3", 00:10:15.964 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:15.964 "is_configured": false, 00:10:15.964 "data_offset": 0, 00:10:15.964 "data_size": 0 00:10:15.964 }, 00:10:15.964 { 00:10:15.964 "name": "BaseBdev4", 00:10:15.964 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:15.964 "is_configured": false, 00:10:15.964 "data_offset": 0, 00:10:15.964 "data_size": 0 00:10:15.964 } 00:10:15.964 ] 00:10:15.964 }' 00:10:15.964 04:59:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:15.964 04:59:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.534 04:59:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:16.534 04:59:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.534 04:59:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.534 [2024-12-14 04:59:27.124608] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:16.534 BaseBdev3 00:10:16.534 04:59:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.534 04:59:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:16.534 04:59:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local 
bdev_name=BaseBdev3 00:10:16.534 04:59:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:16.534 04:59:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:16.534 04:59:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:16.534 04:59:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:16.534 04:59:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:16.534 04:59:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.534 04:59:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.534 04:59:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.534 04:59:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:16.534 04:59:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.534 04:59:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.534 [ 00:10:16.534 { 00:10:16.534 "name": "BaseBdev3", 00:10:16.534 "aliases": [ 00:10:16.534 "ab07f2a6-6ec9-47a5-823e-a856a013cfe9" 00:10:16.534 ], 00:10:16.534 "product_name": "Malloc disk", 00:10:16.534 "block_size": 512, 00:10:16.534 "num_blocks": 65536, 00:10:16.534 "uuid": "ab07f2a6-6ec9-47a5-823e-a856a013cfe9", 00:10:16.534 "assigned_rate_limits": { 00:10:16.534 "rw_ios_per_sec": 0, 00:10:16.534 "rw_mbytes_per_sec": 0, 00:10:16.534 "r_mbytes_per_sec": 0, 00:10:16.534 "w_mbytes_per_sec": 0 00:10:16.534 }, 00:10:16.534 "claimed": true, 00:10:16.534 "claim_type": "exclusive_write", 00:10:16.534 "zoned": false, 00:10:16.534 "supported_io_types": { 00:10:16.534 "read": true, 00:10:16.534 
"write": true, 00:10:16.534 "unmap": true, 00:10:16.534 "flush": true, 00:10:16.534 "reset": true, 00:10:16.534 "nvme_admin": false, 00:10:16.534 "nvme_io": false, 00:10:16.534 "nvme_io_md": false, 00:10:16.534 "write_zeroes": true, 00:10:16.534 "zcopy": true, 00:10:16.534 "get_zone_info": false, 00:10:16.534 "zone_management": false, 00:10:16.534 "zone_append": false, 00:10:16.534 "compare": false, 00:10:16.534 "compare_and_write": false, 00:10:16.534 "abort": true, 00:10:16.534 "seek_hole": false, 00:10:16.534 "seek_data": false, 00:10:16.534 "copy": true, 00:10:16.534 "nvme_iov_md": false 00:10:16.534 }, 00:10:16.534 "memory_domains": [ 00:10:16.534 { 00:10:16.534 "dma_device_id": "system", 00:10:16.534 "dma_device_type": 1 00:10:16.534 }, 00:10:16.534 { 00:10:16.534 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:16.534 "dma_device_type": 2 00:10:16.534 } 00:10:16.534 ], 00:10:16.534 "driver_specific": {} 00:10:16.534 } 00:10:16.534 ] 00:10:16.534 04:59:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.534 04:59:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:16.534 04:59:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:16.534 04:59:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:16.534 04:59:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:16.534 04:59:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:16.534 04:59:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:16.534 04:59:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:16.534 04:59:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:10:16.534 04:59:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:16.534 04:59:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:16.534 04:59:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:16.534 04:59:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:16.534 04:59:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:16.534 04:59:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:16.534 04:59:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.534 04:59:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.534 04:59:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:16.534 04:59:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.534 04:59:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:16.534 "name": "Existed_Raid", 00:10:16.534 "uuid": "bfa51895-7aaf-4ffa-be55-1ab2efb5d175", 00:10:16.534 "strip_size_kb": 0, 00:10:16.534 "state": "configuring", 00:10:16.534 "raid_level": "raid1", 00:10:16.534 "superblock": true, 00:10:16.534 "num_base_bdevs": 4, 00:10:16.534 "num_base_bdevs_discovered": 3, 00:10:16.534 "num_base_bdevs_operational": 4, 00:10:16.534 "base_bdevs_list": [ 00:10:16.534 { 00:10:16.534 "name": "BaseBdev1", 00:10:16.534 "uuid": "1aa5cec1-1da8-40e8-9455-f735583909ca", 00:10:16.534 "is_configured": true, 00:10:16.534 "data_offset": 2048, 00:10:16.534 "data_size": 63488 00:10:16.534 }, 00:10:16.534 { 00:10:16.534 "name": "BaseBdev2", 00:10:16.534 "uuid": 
"ce181cd3-c768-446d-ba54-19ae1f852b70", 00:10:16.534 "is_configured": true, 00:10:16.534 "data_offset": 2048, 00:10:16.534 "data_size": 63488 00:10:16.534 }, 00:10:16.534 { 00:10:16.534 "name": "BaseBdev3", 00:10:16.534 "uuid": "ab07f2a6-6ec9-47a5-823e-a856a013cfe9", 00:10:16.534 "is_configured": true, 00:10:16.534 "data_offset": 2048, 00:10:16.534 "data_size": 63488 00:10:16.534 }, 00:10:16.534 { 00:10:16.534 "name": "BaseBdev4", 00:10:16.534 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:16.534 "is_configured": false, 00:10:16.534 "data_offset": 0, 00:10:16.534 "data_size": 0 00:10:16.534 } 00:10:16.534 ] 00:10:16.534 }' 00:10:16.534 04:59:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:16.534 04:59:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.794 04:59:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:16.794 04:59:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.794 04:59:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.794 [2024-12-14 04:59:27.610738] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:16.794 [2024-12-14 04:59:27.610937] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:10:16.794 [2024-12-14 04:59:27.610953] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:16.794 [2024-12-14 04:59:27.611267] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:10:16.794 [2024-12-14 04:59:27.611425] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:10:16.794 [2024-12-14 04:59:27.611440] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 
00:10:16.794 BaseBdev4 00:10:16.794 [2024-12-14 04:59:27.611590] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:16.794 04:59:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.794 04:59:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:10:16.794 04:59:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:10:16.794 04:59:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:16.794 04:59:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:16.794 04:59:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:16.794 04:59:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:16.794 04:59:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:16.795 04:59:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.795 04:59:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.795 04:59:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.795 04:59:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:16.795 04:59:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.795 04:59:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.795 [ 00:10:16.795 { 00:10:16.795 "name": "BaseBdev4", 00:10:16.795 "aliases": [ 00:10:16.795 "0435a78b-bd47-4a53-9bff-f5896a8a095a" 00:10:16.795 ], 00:10:16.795 "product_name": "Malloc disk", 00:10:16.795 "block_size": 512, 00:10:16.795 
"num_blocks": 65536, 00:10:16.795 "uuid": "0435a78b-bd47-4a53-9bff-f5896a8a095a", 00:10:16.795 "assigned_rate_limits": { 00:10:16.795 "rw_ios_per_sec": 0, 00:10:16.795 "rw_mbytes_per_sec": 0, 00:10:16.795 "r_mbytes_per_sec": 0, 00:10:16.795 "w_mbytes_per_sec": 0 00:10:16.795 }, 00:10:16.795 "claimed": true, 00:10:16.795 "claim_type": "exclusive_write", 00:10:16.795 "zoned": false, 00:10:16.795 "supported_io_types": { 00:10:16.795 "read": true, 00:10:16.795 "write": true, 00:10:16.795 "unmap": true, 00:10:16.795 "flush": true, 00:10:16.795 "reset": true, 00:10:16.795 "nvme_admin": false, 00:10:16.795 "nvme_io": false, 00:10:16.795 "nvme_io_md": false, 00:10:16.795 "write_zeroes": true, 00:10:16.795 "zcopy": true, 00:10:16.795 "get_zone_info": false, 00:10:16.795 "zone_management": false, 00:10:16.795 "zone_append": false, 00:10:16.795 "compare": false, 00:10:16.795 "compare_and_write": false, 00:10:16.795 "abort": true, 00:10:16.795 "seek_hole": false, 00:10:16.795 "seek_data": false, 00:10:16.795 "copy": true, 00:10:16.795 "nvme_iov_md": false 00:10:16.795 }, 00:10:16.795 "memory_domains": [ 00:10:16.795 { 00:10:16.795 "dma_device_id": "system", 00:10:16.795 "dma_device_type": 1 00:10:16.795 }, 00:10:16.795 { 00:10:16.795 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:16.795 "dma_device_type": 2 00:10:16.795 } 00:10:16.795 ], 00:10:16.795 "driver_specific": {} 00:10:16.795 } 00:10:16.795 ] 00:10:16.795 04:59:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.795 04:59:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:16.795 04:59:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:16.795 04:59:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:16.795 04:59:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 
00:10:16.795 04:59:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:16.795 04:59:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:16.795 04:59:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:16.795 04:59:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:16.795 04:59:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:16.795 04:59:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:16.795 04:59:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:16.795 04:59:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:16.795 04:59:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:16.795 04:59:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:16.795 04:59:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:16.795 04:59:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.795 04:59:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.795 04:59:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.081 04:59:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:17.081 "name": "Existed_Raid", 00:10:17.081 "uuid": "bfa51895-7aaf-4ffa-be55-1ab2efb5d175", 00:10:17.081 "strip_size_kb": 0, 00:10:17.081 "state": "online", 00:10:17.081 "raid_level": "raid1", 00:10:17.081 "superblock": true, 00:10:17.081 "num_base_bdevs": 4, 
00:10:17.081 "num_base_bdevs_discovered": 4, 00:10:17.081 "num_base_bdevs_operational": 4, 00:10:17.081 "base_bdevs_list": [ 00:10:17.081 { 00:10:17.081 "name": "BaseBdev1", 00:10:17.081 "uuid": "1aa5cec1-1da8-40e8-9455-f735583909ca", 00:10:17.081 "is_configured": true, 00:10:17.081 "data_offset": 2048, 00:10:17.081 "data_size": 63488 00:10:17.081 }, 00:10:17.081 { 00:10:17.081 "name": "BaseBdev2", 00:10:17.081 "uuid": "ce181cd3-c768-446d-ba54-19ae1f852b70", 00:10:17.081 "is_configured": true, 00:10:17.081 "data_offset": 2048, 00:10:17.081 "data_size": 63488 00:10:17.081 }, 00:10:17.081 { 00:10:17.081 "name": "BaseBdev3", 00:10:17.081 "uuid": "ab07f2a6-6ec9-47a5-823e-a856a013cfe9", 00:10:17.081 "is_configured": true, 00:10:17.081 "data_offset": 2048, 00:10:17.081 "data_size": 63488 00:10:17.081 }, 00:10:17.081 { 00:10:17.081 "name": "BaseBdev4", 00:10:17.081 "uuid": "0435a78b-bd47-4a53-9bff-f5896a8a095a", 00:10:17.081 "is_configured": true, 00:10:17.081 "data_offset": 2048, 00:10:17.081 "data_size": 63488 00:10:17.081 } 00:10:17.081 ] 00:10:17.081 }' 00:10:17.081 04:59:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:17.081 04:59:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.341 04:59:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:17.341 04:59:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:17.341 04:59:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:17.341 04:59:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:17.341 04:59:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:17.341 04:59:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:17.341 
04:59:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:17.341 04:59:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:17.341 04:59:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.341 04:59:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.341 [2024-12-14 04:59:28.074347] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:17.341 04:59:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.341 04:59:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:17.341 "name": "Existed_Raid", 00:10:17.341 "aliases": [ 00:10:17.341 "bfa51895-7aaf-4ffa-be55-1ab2efb5d175" 00:10:17.341 ], 00:10:17.341 "product_name": "Raid Volume", 00:10:17.341 "block_size": 512, 00:10:17.341 "num_blocks": 63488, 00:10:17.341 "uuid": "bfa51895-7aaf-4ffa-be55-1ab2efb5d175", 00:10:17.341 "assigned_rate_limits": { 00:10:17.341 "rw_ios_per_sec": 0, 00:10:17.341 "rw_mbytes_per_sec": 0, 00:10:17.341 "r_mbytes_per_sec": 0, 00:10:17.341 "w_mbytes_per_sec": 0 00:10:17.341 }, 00:10:17.341 "claimed": false, 00:10:17.341 "zoned": false, 00:10:17.341 "supported_io_types": { 00:10:17.341 "read": true, 00:10:17.341 "write": true, 00:10:17.341 "unmap": false, 00:10:17.341 "flush": false, 00:10:17.341 "reset": true, 00:10:17.341 "nvme_admin": false, 00:10:17.341 "nvme_io": false, 00:10:17.341 "nvme_io_md": false, 00:10:17.341 "write_zeroes": true, 00:10:17.341 "zcopy": false, 00:10:17.341 "get_zone_info": false, 00:10:17.341 "zone_management": false, 00:10:17.341 "zone_append": false, 00:10:17.341 "compare": false, 00:10:17.341 "compare_and_write": false, 00:10:17.341 "abort": false, 00:10:17.341 "seek_hole": false, 00:10:17.341 "seek_data": false, 00:10:17.341 "copy": false, 00:10:17.341 
"nvme_iov_md": false 00:10:17.341 }, 00:10:17.341 "memory_domains": [ 00:10:17.341 { 00:10:17.341 "dma_device_id": "system", 00:10:17.341 "dma_device_type": 1 00:10:17.341 }, 00:10:17.341 { 00:10:17.341 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:17.341 "dma_device_type": 2 00:10:17.341 }, 00:10:17.341 { 00:10:17.341 "dma_device_id": "system", 00:10:17.341 "dma_device_type": 1 00:10:17.341 }, 00:10:17.341 { 00:10:17.341 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:17.341 "dma_device_type": 2 00:10:17.341 }, 00:10:17.341 { 00:10:17.341 "dma_device_id": "system", 00:10:17.341 "dma_device_type": 1 00:10:17.341 }, 00:10:17.341 { 00:10:17.341 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:17.341 "dma_device_type": 2 00:10:17.341 }, 00:10:17.341 { 00:10:17.341 "dma_device_id": "system", 00:10:17.341 "dma_device_type": 1 00:10:17.341 }, 00:10:17.341 { 00:10:17.341 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:17.341 "dma_device_type": 2 00:10:17.341 } 00:10:17.341 ], 00:10:17.341 "driver_specific": { 00:10:17.342 "raid": { 00:10:17.342 "uuid": "bfa51895-7aaf-4ffa-be55-1ab2efb5d175", 00:10:17.342 "strip_size_kb": 0, 00:10:17.342 "state": "online", 00:10:17.342 "raid_level": "raid1", 00:10:17.342 "superblock": true, 00:10:17.342 "num_base_bdevs": 4, 00:10:17.342 "num_base_bdevs_discovered": 4, 00:10:17.342 "num_base_bdevs_operational": 4, 00:10:17.342 "base_bdevs_list": [ 00:10:17.342 { 00:10:17.342 "name": "BaseBdev1", 00:10:17.342 "uuid": "1aa5cec1-1da8-40e8-9455-f735583909ca", 00:10:17.342 "is_configured": true, 00:10:17.342 "data_offset": 2048, 00:10:17.342 "data_size": 63488 00:10:17.342 }, 00:10:17.342 { 00:10:17.342 "name": "BaseBdev2", 00:10:17.342 "uuid": "ce181cd3-c768-446d-ba54-19ae1f852b70", 00:10:17.342 "is_configured": true, 00:10:17.342 "data_offset": 2048, 00:10:17.342 "data_size": 63488 00:10:17.342 }, 00:10:17.342 { 00:10:17.342 "name": "BaseBdev3", 00:10:17.342 "uuid": "ab07f2a6-6ec9-47a5-823e-a856a013cfe9", 00:10:17.342 "is_configured": true, 
00:10:17.342 "data_offset": 2048, 00:10:17.342 "data_size": 63488 00:10:17.342 }, 00:10:17.342 { 00:10:17.342 "name": "BaseBdev4", 00:10:17.342 "uuid": "0435a78b-bd47-4a53-9bff-f5896a8a095a", 00:10:17.342 "is_configured": true, 00:10:17.342 "data_offset": 2048, 00:10:17.342 "data_size": 63488 00:10:17.342 } 00:10:17.342 ] 00:10:17.342 } 00:10:17.342 } 00:10:17.342 }' 00:10:17.342 04:59:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:17.342 04:59:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:17.342 BaseBdev2 00:10:17.342 BaseBdev3 00:10:17.342 BaseBdev4' 00:10:17.342 04:59:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:17.342 04:59:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:17.342 04:59:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:17.342 04:59:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:17.342 04:59:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.342 04:59:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:17.342 04:59:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.342 04:59:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.601 04:59:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:17.601 04:59:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:17.601 04:59:28 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:17.601 04:59:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:17.601 04:59:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:17.601 04:59:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.601 04:59:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.601 04:59:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.601 04:59:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:17.601 04:59:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:17.601 04:59:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:17.601 04:59:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:17.601 04:59:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:17.601 04:59:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.601 04:59:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.601 04:59:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.601 04:59:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:17.601 04:59:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:17.601 04:59:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:10:17.601 04:59:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:17.601 04:59:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.601 04:59:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.601 04:59:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:17.601 04:59:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.601 04:59:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:17.601 04:59:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:17.601 04:59:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:17.601 04:59:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.601 04:59:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.601 [2024-12-14 04:59:28.393502] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:17.601 04:59:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.601 04:59:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:17.601 04:59:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:10:17.601 04:59:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:17.602 04:59:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:10:17.602 04:59:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:10:17.602 04:59:28 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:10:17.602 04:59:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:17.602 04:59:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:17.602 04:59:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:17.602 04:59:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:17.602 04:59:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:17.602 04:59:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:17.602 04:59:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:17.602 04:59:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:17.602 04:59:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:17.602 04:59:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:17.602 04:59:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.602 04:59:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.602 04:59:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:17.602 04:59:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.602 04:59:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:17.602 "name": "Existed_Raid", 00:10:17.602 "uuid": "bfa51895-7aaf-4ffa-be55-1ab2efb5d175", 00:10:17.602 "strip_size_kb": 0, 00:10:17.602 
"state": "online", 00:10:17.602 "raid_level": "raid1", 00:10:17.602 "superblock": true, 00:10:17.602 "num_base_bdevs": 4, 00:10:17.602 "num_base_bdevs_discovered": 3, 00:10:17.602 "num_base_bdevs_operational": 3, 00:10:17.602 "base_bdevs_list": [ 00:10:17.602 { 00:10:17.602 "name": null, 00:10:17.602 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:17.602 "is_configured": false, 00:10:17.602 "data_offset": 0, 00:10:17.602 "data_size": 63488 00:10:17.602 }, 00:10:17.602 { 00:10:17.602 "name": "BaseBdev2", 00:10:17.602 "uuid": "ce181cd3-c768-446d-ba54-19ae1f852b70", 00:10:17.602 "is_configured": true, 00:10:17.602 "data_offset": 2048, 00:10:17.602 "data_size": 63488 00:10:17.602 }, 00:10:17.602 { 00:10:17.602 "name": "BaseBdev3", 00:10:17.602 "uuid": "ab07f2a6-6ec9-47a5-823e-a856a013cfe9", 00:10:17.602 "is_configured": true, 00:10:17.602 "data_offset": 2048, 00:10:17.602 "data_size": 63488 00:10:17.602 }, 00:10:17.602 { 00:10:17.602 "name": "BaseBdev4", 00:10:17.602 "uuid": "0435a78b-bd47-4a53-9bff-f5896a8a095a", 00:10:17.602 "is_configured": true, 00:10:17.602 "data_offset": 2048, 00:10:17.602 "data_size": 63488 00:10:17.602 } 00:10:17.602 ] 00:10:17.602 }' 00:10:17.602 04:59:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:17.602 04:59:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.169 04:59:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:18.169 04:59:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:18.169 04:59:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:18.170 04:59:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:18.170 04:59:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.170 04:59:28 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.170 04:59:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.170 04:59:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:18.170 04:59:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:18.170 04:59:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:18.170 04:59:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.170 04:59:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.170 [2024-12-14 04:59:28.860034] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:18.170 04:59:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.170 04:59:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:18.170 04:59:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:18.170 04:59:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:18.170 04:59:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:18.170 04:59:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.170 04:59:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.170 04:59:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.170 04:59:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:18.170 04:59:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' 
Existed_Raid ']' 00:10:18.170 04:59:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:18.170 04:59:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.170 04:59:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.170 [2024-12-14 04:59:28.919225] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:18.170 04:59:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.170 04:59:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:18.170 04:59:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:18.170 04:59:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:18.170 04:59:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:18.170 04:59:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.170 04:59:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.170 04:59:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.170 04:59:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:18.170 04:59:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:18.170 04:59:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:10:18.170 04:59:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.170 04:59:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.170 [2024-12-14 04:59:28.974010] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:10:18.170 [2024-12-14 04:59:28.974113] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:18.170 [2024-12-14 04:59:28.985557] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:18.170 [2024-12-14 04:59:28.985603] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:18.170 [2024-12-14 04:59:28.985615] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:10:18.170 04:59:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.170 04:59:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:18.170 04:59:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:18.170 04:59:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:18.170 04:59:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.170 04:59:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.170 04:59:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:18.170 04:59:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.170 04:59:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:18.170 04:59:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:18.170 04:59:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:10:18.170 04:59:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:18.170 04:59:29 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:18.170 04:59:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:18.170 04:59:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.170 04:59:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.170 BaseBdev2 00:10:18.170 04:59:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.170 04:59:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:18.170 04:59:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:10:18.170 04:59:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:18.170 04:59:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:18.431 04:59:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:18.431 04:59:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:18.431 04:59:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:18.431 04:59:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.431 04:59:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.431 04:59:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.431 04:59:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:18.431 04:59:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.431 04:59:29 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:10:18.431 [ 00:10:18.431 { 00:10:18.431 "name": "BaseBdev2", 00:10:18.431 "aliases": [ 00:10:18.431 "3d7117ea-de3a-46f3-800b-b023800f7959" 00:10:18.431 ], 00:10:18.431 "product_name": "Malloc disk", 00:10:18.431 "block_size": 512, 00:10:18.431 "num_blocks": 65536, 00:10:18.431 "uuid": "3d7117ea-de3a-46f3-800b-b023800f7959", 00:10:18.431 "assigned_rate_limits": { 00:10:18.431 "rw_ios_per_sec": 0, 00:10:18.431 "rw_mbytes_per_sec": 0, 00:10:18.431 "r_mbytes_per_sec": 0, 00:10:18.431 "w_mbytes_per_sec": 0 00:10:18.431 }, 00:10:18.431 "claimed": false, 00:10:18.431 "zoned": false, 00:10:18.431 "supported_io_types": { 00:10:18.431 "read": true, 00:10:18.431 "write": true, 00:10:18.431 "unmap": true, 00:10:18.431 "flush": true, 00:10:18.431 "reset": true, 00:10:18.431 "nvme_admin": false, 00:10:18.431 "nvme_io": false, 00:10:18.431 "nvme_io_md": false, 00:10:18.431 "write_zeroes": true, 00:10:18.431 "zcopy": true, 00:10:18.431 "get_zone_info": false, 00:10:18.431 "zone_management": false, 00:10:18.431 "zone_append": false, 00:10:18.431 "compare": false, 00:10:18.431 "compare_and_write": false, 00:10:18.431 "abort": true, 00:10:18.431 "seek_hole": false, 00:10:18.431 "seek_data": false, 00:10:18.431 "copy": true, 00:10:18.431 "nvme_iov_md": false 00:10:18.431 }, 00:10:18.431 "memory_domains": [ 00:10:18.431 { 00:10:18.431 "dma_device_id": "system", 00:10:18.431 "dma_device_type": 1 00:10:18.431 }, 00:10:18.431 { 00:10:18.431 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:18.431 "dma_device_type": 2 00:10:18.431 } 00:10:18.431 ], 00:10:18.431 "driver_specific": {} 00:10:18.431 } 00:10:18.431 ] 00:10:18.431 04:59:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.431 04:59:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:18.431 04:59:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:18.431 04:59:29 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:18.431 04:59:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:18.431 04:59:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.431 04:59:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.431 BaseBdev3 00:10:18.431 04:59:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.431 04:59:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:18.431 04:59:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:10:18.431 04:59:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:18.431 04:59:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:18.431 04:59:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:18.431 04:59:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:18.431 04:59:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:18.431 04:59:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.431 04:59:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.431 04:59:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.431 04:59:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:18.431 04:59:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.431 04:59:29 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.431 [ 00:10:18.431 { 00:10:18.431 "name": "BaseBdev3", 00:10:18.431 "aliases": [ 00:10:18.431 "c2d51d3d-62c9-4f0b-bc08-e26d79af4eb9" 00:10:18.431 ], 00:10:18.431 "product_name": "Malloc disk", 00:10:18.431 "block_size": 512, 00:10:18.431 "num_blocks": 65536, 00:10:18.431 "uuid": "c2d51d3d-62c9-4f0b-bc08-e26d79af4eb9", 00:10:18.431 "assigned_rate_limits": { 00:10:18.431 "rw_ios_per_sec": 0, 00:10:18.431 "rw_mbytes_per_sec": 0, 00:10:18.431 "r_mbytes_per_sec": 0, 00:10:18.431 "w_mbytes_per_sec": 0 00:10:18.431 }, 00:10:18.431 "claimed": false, 00:10:18.431 "zoned": false, 00:10:18.431 "supported_io_types": { 00:10:18.431 "read": true, 00:10:18.431 "write": true, 00:10:18.431 "unmap": true, 00:10:18.431 "flush": true, 00:10:18.431 "reset": true, 00:10:18.431 "nvme_admin": false, 00:10:18.431 "nvme_io": false, 00:10:18.431 "nvme_io_md": false, 00:10:18.431 "write_zeroes": true, 00:10:18.431 "zcopy": true, 00:10:18.431 "get_zone_info": false, 00:10:18.431 "zone_management": false, 00:10:18.431 "zone_append": false, 00:10:18.431 "compare": false, 00:10:18.431 "compare_and_write": false, 00:10:18.431 "abort": true, 00:10:18.431 "seek_hole": false, 00:10:18.431 "seek_data": false, 00:10:18.431 "copy": true, 00:10:18.431 "nvme_iov_md": false 00:10:18.431 }, 00:10:18.431 "memory_domains": [ 00:10:18.431 { 00:10:18.431 "dma_device_id": "system", 00:10:18.431 "dma_device_type": 1 00:10:18.431 }, 00:10:18.431 { 00:10:18.431 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:18.431 "dma_device_type": 2 00:10:18.431 } 00:10:18.431 ], 00:10:18.431 "driver_specific": {} 00:10:18.431 } 00:10:18.431 ] 00:10:18.431 04:59:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.431 04:59:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:18.431 04:59:29 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:18.431 04:59:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:18.431 04:59:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:18.431 04:59:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.431 04:59:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.431 BaseBdev4 00:10:18.431 04:59:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.431 04:59:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:10:18.431 04:59:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:10:18.431 04:59:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:18.431 04:59:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:18.431 04:59:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:18.431 04:59:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:18.431 04:59:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:18.431 04:59:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.431 04:59:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.431 04:59:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.431 04:59:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:18.431 04:59:29 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.431 04:59:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.431 [ 00:10:18.431 { 00:10:18.431 "name": "BaseBdev4", 00:10:18.431 "aliases": [ 00:10:18.431 "9311f1c3-4fdd-4bae-8055-71b748247f8a" 00:10:18.431 ], 00:10:18.431 "product_name": "Malloc disk", 00:10:18.431 "block_size": 512, 00:10:18.431 "num_blocks": 65536, 00:10:18.431 "uuid": "9311f1c3-4fdd-4bae-8055-71b748247f8a", 00:10:18.431 "assigned_rate_limits": { 00:10:18.431 "rw_ios_per_sec": 0, 00:10:18.431 "rw_mbytes_per_sec": 0, 00:10:18.431 "r_mbytes_per_sec": 0, 00:10:18.431 "w_mbytes_per_sec": 0 00:10:18.431 }, 00:10:18.431 "claimed": false, 00:10:18.431 "zoned": false, 00:10:18.431 "supported_io_types": { 00:10:18.431 "read": true, 00:10:18.431 "write": true, 00:10:18.431 "unmap": true, 00:10:18.431 "flush": true, 00:10:18.431 "reset": true, 00:10:18.431 "nvme_admin": false, 00:10:18.431 "nvme_io": false, 00:10:18.431 "nvme_io_md": false, 00:10:18.431 "write_zeroes": true, 00:10:18.431 "zcopy": true, 00:10:18.431 "get_zone_info": false, 00:10:18.431 "zone_management": false, 00:10:18.431 "zone_append": false, 00:10:18.431 "compare": false, 00:10:18.431 "compare_and_write": false, 00:10:18.431 "abort": true, 00:10:18.431 "seek_hole": false, 00:10:18.431 "seek_data": false, 00:10:18.431 "copy": true, 00:10:18.431 "nvme_iov_md": false 00:10:18.432 }, 00:10:18.432 "memory_domains": [ 00:10:18.432 { 00:10:18.432 "dma_device_id": "system", 00:10:18.432 "dma_device_type": 1 00:10:18.432 }, 00:10:18.432 { 00:10:18.432 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:18.432 "dma_device_type": 2 00:10:18.432 } 00:10:18.432 ], 00:10:18.432 "driver_specific": {} 00:10:18.432 } 00:10:18.432 ] 00:10:18.432 04:59:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.432 04:59:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 
00:10:18.432 04:59:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:18.432 04:59:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:18.432 04:59:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:18.432 04:59:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.432 04:59:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.432 [2024-12-14 04:59:29.189014] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:18.432 [2024-12-14 04:59:29.189061] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:18.432 [2024-12-14 04:59:29.189079] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:18.432 [2024-12-14 04:59:29.190841] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:18.432 [2024-12-14 04:59:29.190890] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:18.432 04:59:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.432 04:59:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:18.432 04:59:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:18.432 04:59:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:18.432 04:59:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:18.432 04:59:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:10:18.432 04:59:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:18.432 04:59:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:18.432 04:59:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:18.432 04:59:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:18.432 04:59:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:18.432 04:59:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:18.432 04:59:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:18.432 04:59:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.432 04:59:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.432 04:59:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.432 04:59:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:18.432 "name": "Existed_Raid", 00:10:18.432 "uuid": "2e817f6a-1b03-47b0-b56d-a79a63a7d7ca", 00:10:18.432 "strip_size_kb": 0, 00:10:18.432 "state": "configuring", 00:10:18.432 "raid_level": "raid1", 00:10:18.432 "superblock": true, 00:10:18.432 "num_base_bdevs": 4, 00:10:18.432 "num_base_bdevs_discovered": 3, 00:10:18.432 "num_base_bdevs_operational": 4, 00:10:18.432 "base_bdevs_list": [ 00:10:18.432 { 00:10:18.432 "name": "BaseBdev1", 00:10:18.432 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:18.432 "is_configured": false, 00:10:18.432 "data_offset": 0, 00:10:18.432 "data_size": 0 00:10:18.432 }, 00:10:18.432 { 00:10:18.432 "name": "BaseBdev2", 00:10:18.432 "uuid": "3d7117ea-de3a-46f3-800b-b023800f7959", 
00:10:18.432 "is_configured": true, 00:10:18.432 "data_offset": 2048, 00:10:18.432 "data_size": 63488 00:10:18.432 }, 00:10:18.432 { 00:10:18.432 "name": "BaseBdev3", 00:10:18.432 "uuid": "c2d51d3d-62c9-4f0b-bc08-e26d79af4eb9", 00:10:18.432 "is_configured": true, 00:10:18.432 "data_offset": 2048, 00:10:18.432 "data_size": 63488 00:10:18.432 }, 00:10:18.432 { 00:10:18.432 "name": "BaseBdev4", 00:10:18.432 "uuid": "9311f1c3-4fdd-4bae-8055-71b748247f8a", 00:10:18.432 "is_configured": true, 00:10:18.432 "data_offset": 2048, 00:10:18.432 "data_size": 63488 00:10:18.432 } 00:10:18.432 ] 00:10:18.432 }' 00:10:18.432 04:59:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:18.432 04:59:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.001 04:59:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:19.001 04:59:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.001 04:59:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.001 [2024-12-14 04:59:29.648214] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:19.001 04:59:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.001 04:59:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:19.001 04:59:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:19.001 04:59:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:19.001 04:59:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:19.001 04:59:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:10:19.001 04:59:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:19.001 04:59:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:19.001 04:59:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:19.001 04:59:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:19.001 04:59:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:19.001 04:59:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:19.001 04:59:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:19.001 04:59:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.001 04:59:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.001 04:59:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.001 04:59:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:19.001 "name": "Existed_Raid", 00:10:19.001 "uuid": "2e817f6a-1b03-47b0-b56d-a79a63a7d7ca", 00:10:19.001 "strip_size_kb": 0, 00:10:19.001 "state": "configuring", 00:10:19.001 "raid_level": "raid1", 00:10:19.001 "superblock": true, 00:10:19.001 "num_base_bdevs": 4, 00:10:19.001 "num_base_bdevs_discovered": 2, 00:10:19.001 "num_base_bdevs_operational": 4, 00:10:19.001 "base_bdevs_list": [ 00:10:19.001 { 00:10:19.001 "name": "BaseBdev1", 00:10:19.001 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:19.001 "is_configured": false, 00:10:19.001 "data_offset": 0, 00:10:19.001 "data_size": 0 00:10:19.001 }, 00:10:19.001 { 00:10:19.001 "name": null, 00:10:19.001 "uuid": "3d7117ea-de3a-46f3-800b-b023800f7959", 00:10:19.001 
"is_configured": false, 00:10:19.001 "data_offset": 0, 00:10:19.001 "data_size": 63488 00:10:19.001 }, 00:10:19.001 { 00:10:19.001 "name": "BaseBdev3", 00:10:19.001 "uuid": "c2d51d3d-62c9-4f0b-bc08-e26d79af4eb9", 00:10:19.001 "is_configured": true, 00:10:19.001 "data_offset": 2048, 00:10:19.001 "data_size": 63488 00:10:19.001 }, 00:10:19.001 { 00:10:19.001 "name": "BaseBdev4", 00:10:19.001 "uuid": "9311f1c3-4fdd-4bae-8055-71b748247f8a", 00:10:19.001 "is_configured": true, 00:10:19.001 "data_offset": 2048, 00:10:19.001 "data_size": 63488 00:10:19.002 } 00:10:19.002 ] 00:10:19.002 }' 00:10:19.002 04:59:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:19.002 04:59:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.262 04:59:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:19.262 04:59:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:19.262 04:59:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.262 04:59:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.262 04:59:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.262 04:59:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:19.262 04:59:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:19.262 04:59:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.262 04:59:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.262 [2024-12-14 04:59:30.066412] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:19.262 BaseBdev1 
00:10:19.262 04:59:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.262 04:59:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:19.262 04:59:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:10:19.262 04:59:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:19.262 04:59:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:19.262 04:59:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:19.262 04:59:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:19.262 04:59:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:19.262 04:59:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.262 04:59:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.262 04:59:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.262 04:59:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:19.262 04:59:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.262 04:59:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.262 [ 00:10:19.262 { 00:10:19.262 "name": "BaseBdev1", 00:10:19.262 "aliases": [ 00:10:19.262 "1341fdef-ab27-4b68-99a3-866a81374242" 00:10:19.262 ], 00:10:19.262 "product_name": "Malloc disk", 00:10:19.262 "block_size": 512, 00:10:19.262 "num_blocks": 65536, 00:10:19.262 "uuid": "1341fdef-ab27-4b68-99a3-866a81374242", 00:10:19.262 "assigned_rate_limits": { 00:10:19.262 
"rw_ios_per_sec": 0, 00:10:19.262 "rw_mbytes_per_sec": 0, 00:10:19.262 "r_mbytes_per_sec": 0, 00:10:19.262 "w_mbytes_per_sec": 0 00:10:19.262 }, 00:10:19.262 "claimed": true, 00:10:19.262 "claim_type": "exclusive_write", 00:10:19.262 "zoned": false, 00:10:19.262 "supported_io_types": { 00:10:19.262 "read": true, 00:10:19.262 "write": true, 00:10:19.262 "unmap": true, 00:10:19.262 "flush": true, 00:10:19.262 "reset": true, 00:10:19.262 "nvme_admin": false, 00:10:19.262 "nvme_io": false, 00:10:19.262 "nvme_io_md": false, 00:10:19.262 "write_zeroes": true, 00:10:19.262 "zcopy": true, 00:10:19.262 "get_zone_info": false, 00:10:19.262 "zone_management": false, 00:10:19.262 "zone_append": false, 00:10:19.262 "compare": false, 00:10:19.262 "compare_and_write": false, 00:10:19.262 "abort": true, 00:10:19.262 "seek_hole": false, 00:10:19.262 "seek_data": false, 00:10:19.262 "copy": true, 00:10:19.262 "nvme_iov_md": false 00:10:19.262 }, 00:10:19.262 "memory_domains": [ 00:10:19.262 { 00:10:19.262 "dma_device_id": "system", 00:10:19.262 "dma_device_type": 1 00:10:19.262 }, 00:10:19.262 { 00:10:19.262 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:19.262 "dma_device_type": 2 00:10:19.262 } 00:10:19.262 ], 00:10:19.262 "driver_specific": {} 00:10:19.262 } 00:10:19.262 ] 00:10:19.262 04:59:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.262 04:59:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:19.262 04:59:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:19.262 04:59:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:19.262 04:59:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:19.262 04:59:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:10:19.262 04:59:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:19.262 04:59:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:19.262 04:59:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:19.262 04:59:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:19.262 04:59:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:19.262 04:59:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:19.262 04:59:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:19.262 04:59:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:19.262 04:59:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.262 04:59:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.262 04:59:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.522 04:59:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:19.522 "name": "Existed_Raid", 00:10:19.522 "uuid": "2e817f6a-1b03-47b0-b56d-a79a63a7d7ca", 00:10:19.522 "strip_size_kb": 0, 00:10:19.522 "state": "configuring", 00:10:19.522 "raid_level": "raid1", 00:10:19.522 "superblock": true, 00:10:19.522 "num_base_bdevs": 4, 00:10:19.522 "num_base_bdevs_discovered": 3, 00:10:19.522 "num_base_bdevs_operational": 4, 00:10:19.522 "base_bdevs_list": [ 00:10:19.522 { 00:10:19.522 "name": "BaseBdev1", 00:10:19.522 "uuid": "1341fdef-ab27-4b68-99a3-866a81374242", 00:10:19.522 "is_configured": true, 00:10:19.522 "data_offset": 2048, 00:10:19.522 "data_size": 63488 
00:10:19.522 }, 00:10:19.522 { 00:10:19.522 "name": null, 00:10:19.522 "uuid": "3d7117ea-de3a-46f3-800b-b023800f7959", 00:10:19.522 "is_configured": false, 00:10:19.522 "data_offset": 0, 00:10:19.522 "data_size": 63488 00:10:19.522 }, 00:10:19.522 { 00:10:19.522 "name": "BaseBdev3", 00:10:19.522 "uuid": "c2d51d3d-62c9-4f0b-bc08-e26d79af4eb9", 00:10:19.522 "is_configured": true, 00:10:19.522 "data_offset": 2048, 00:10:19.522 "data_size": 63488 00:10:19.522 }, 00:10:19.522 { 00:10:19.522 "name": "BaseBdev4", 00:10:19.522 "uuid": "9311f1c3-4fdd-4bae-8055-71b748247f8a", 00:10:19.522 "is_configured": true, 00:10:19.522 "data_offset": 2048, 00:10:19.522 "data_size": 63488 00:10:19.522 } 00:10:19.522 ] 00:10:19.522 }' 00:10:19.522 04:59:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:19.522 04:59:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.782 04:59:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:19.782 04:59:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:19.782 04:59:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.782 04:59:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.782 04:59:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.782 04:59:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:19.782 04:59:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:19.782 04:59:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.782 04:59:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.782 
[2024-12-14 04:59:30.585547] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:19.782 04:59:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.782 04:59:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:19.782 04:59:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:19.782 04:59:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:19.782 04:59:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:19.782 04:59:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:19.782 04:59:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:19.782 04:59:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:19.782 04:59:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:19.782 04:59:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:19.782 04:59:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:19.782 04:59:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:19.782 04:59:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.782 04:59:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:19.782 04:59:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.782 04:59:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.782 04:59:30 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:19.782 "name": "Existed_Raid", 00:10:19.782 "uuid": "2e817f6a-1b03-47b0-b56d-a79a63a7d7ca", 00:10:19.782 "strip_size_kb": 0, 00:10:19.782 "state": "configuring", 00:10:19.782 "raid_level": "raid1", 00:10:19.782 "superblock": true, 00:10:19.782 "num_base_bdevs": 4, 00:10:19.782 "num_base_bdevs_discovered": 2, 00:10:19.782 "num_base_bdevs_operational": 4, 00:10:19.782 "base_bdevs_list": [ 00:10:19.782 { 00:10:19.782 "name": "BaseBdev1", 00:10:19.782 "uuid": "1341fdef-ab27-4b68-99a3-866a81374242", 00:10:19.782 "is_configured": true, 00:10:19.782 "data_offset": 2048, 00:10:19.782 "data_size": 63488 00:10:19.782 }, 00:10:19.782 { 00:10:19.782 "name": null, 00:10:19.782 "uuid": "3d7117ea-de3a-46f3-800b-b023800f7959", 00:10:19.782 "is_configured": false, 00:10:19.782 "data_offset": 0, 00:10:19.782 "data_size": 63488 00:10:19.782 }, 00:10:19.782 { 00:10:19.782 "name": null, 00:10:19.782 "uuid": "c2d51d3d-62c9-4f0b-bc08-e26d79af4eb9", 00:10:19.782 "is_configured": false, 00:10:19.782 "data_offset": 0, 00:10:19.782 "data_size": 63488 00:10:19.782 }, 00:10:19.782 { 00:10:19.782 "name": "BaseBdev4", 00:10:19.782 "uuid": "9311f1c3-4fdd-4bae-8055-71b748247f8a", 00:10:19.782 "is_configured": true, 00:10:19.782 "data_offset": 2048, 00:10:19.782 "data_size": 63488 00:10:19.782 } 00:10:19.782 ] 00:10:19.782 }' 00:10:19.782 04:59:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:19.782 04:59:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:20.351 04:59:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:20.351 04:59:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:20.351 04:59:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.351 
04:59:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:20.351 04:59:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.351 04:59:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:20.351 04:59:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:20.351 04:59:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.351 04:59:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:20.351 [2024-12-14 04:59:31.068740] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:20.351 04:59:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.351 04:59:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:20.351 04:59:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:20.351 04:59:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:20.351 04:59:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:20.351 04:59:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:20.351 04:59:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:20.351 04:59:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:20.351 04:59:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:20.351 04:59:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:10:20.351 04:59:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:20.351 04:59:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:20.351 04:59:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:20.351 04:59:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.351 04:59:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:20.351 04:59:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.351 04:59:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:20.351 "name": "Existed_Raid", 00:10:20.351 "uuid": "2e817f6a-1b03-47b0-b56d-a79a63a7d7ca", 00:10:20.352 "strip_size_kb": 0, 00:10:20.352 "state": "configuring", 00:10:20.352 "raid_level": "raid1", 00:10:20.352 "superblock": true, 00:10:20.352 "num_base_bdevs": 4, 00:10:20.352 "num_base_bdevs_discovered": 3, 00:10:20.352 "num_base_bdevs_operational": 4, 00:10:20.352 "base_bdevs_list": [ 00:10:20.352 { 00:10:20.352 "name": "BaseBdev1", 00:10:20.352 "uuid": "1341fdef-ab27-4b68-99a3-866a81374242", 00:10:20.352 "is_configured": true, 00:10:20.352 "data_offset": 2048, 00:10:20.352 "data_size": 63488 00:10:20.352 }, 00:10:20.352 { 00:10:20.352 "name": null, 00:10:20.352 "uuid": "3d7117ea-de3a-46f3-800b-b023800f7959", 00:10:20.352 "is_configured": false, 00:10:20.352 "data_offset": 0, 00:10:20.352 "data_size": 63488 00:10:20.352 }, 00:10:20.352 { 00:10:20.352 "name": "BaseBdev3", 00:10:20.352 "uuid": "c2d51d3d-62c9-4f0b-bc08-e26d79af4eb9", 00:10:20.352 "is_configured": true, 00:10:20.352 "data_offset": 2048, 00:10:20.352 "data_size": 63488 00:10:20.352 }, 00:10:20.352 { 00:10:20.352 "name": "BaseBdev4", 00:10:20.352 "uuid": 
"9311f1c3-4fdd-4bae-8055-71b748247f8a", 00:10:20.352 "is_configured": true, 00:10:20.352 "data_offset": 2048, 00:10:20.352 "data_size": 63488 00:10:20.352 } 00:10:20.352 ] 00:10:20.352 }' 00:10:20.352 04:59:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:20.352 04:59:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:20.921 04:59:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:20.921 04:59:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:20.921 04:59:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.921 04:59:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:20.921 04:59:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.921 04:59:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:20.921 04:59:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:20.921 04:59:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.921 04:59:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:20.921 [2024-12-14 04:59:31.575912] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:20.921 04:59:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.921 04:59:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:20.921 04:59:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:20.921 04:59:31 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:20.921 04:59:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:20.921 04:59:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:20.921 04:59:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:20.922 04:59:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:20.922 04:59:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:20.922 04:59:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:20.922 04:59:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:20.922 04:59:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:20.922 04:59:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:20.922 04:59:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.922 04:59:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:20.922 04:59:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.922 04:59:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:20.922 "name": "Existed_Raid", 00:10:20.922 "uuid": "2e817f6a-1b03-47b0-b56d-a79a63a7d7ca", 00:10:20.922 "strip_size_kb": 0, 00:10:20.922 "state": "configuring", 00:10:20.922 "raid_level": "raid1", 00:10:20.922 "superblock": true, 00:10:20.922 "num_base_bdevs": 4, 00:10:20.922 "num_base_bdevs_discovered": 2, 00:10:20.922 "num_base_bdevs_operational": 4, 00:10:20.922 "base_bdevs_list": [ 00:10:20.922 { 00:10:20.922 "name": null, 00:10:20.922 
"uuid": "1341fdef-ab27-4b68-99a3-866a81374242", 00:10:20.922 "is_configured": false, 00:10:20.922 "data_offset": 0, 00:10:20.922 "data_size": 63488 00:10:20.922 }, 00:10:20.922 { 00:10:20.922 "name": null, 00:10:20.922 "uuid": "3d7117ea-de3a-46f3-800b-b023800f7959", 00:10:20.922 "is_configured": false, 00:10:20.922 "data_offset": 0, 00:10:20.922 "data_size": 63488 00:10:20.922 }, 00:10:20.922 { 00:10:20.922 "name": "BaseBdev3", 00:10:20.922 "uuid": "c2d51d3d-62c9-4f0b-bc08-e26d79af4eb9", 00:10:20.922 "is_configured": true, 00:10:20.922 "data_offset": 2048, 00:10:20.922 "data_size": 63488 00:10:20.922 }, 00:10:20.922 { 00:10:20.922 "name": "BaseBdev4", 00:10:20.922 "uuid": "9311f1c3-4fdd-4bae-8055-71b748247f8a", 00:10:20.922 "is_configured": true, 00:10:20.922 "data_offset": 2048, 00:10:20.922 "data_size": 63488 00:10:20.922 } 00:10:20.922 ] 00:10:20.922 }' 00:10:20.922 04:59:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:20.922 04:59:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:21.182 04:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:21.182 04:59:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.182 04:59:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:21.182 04:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:21.182 04:59:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.182 04:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:21.182 04:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:21.182 04:59:32 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.182 04:59:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:21.182 [2024-12-14 04:59:32.053520] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:21.182 04:59:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.182 04:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:21.182 04:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:21.182 04:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:21.182 04:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:21.182 04:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:21.182 04:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:21.182 04:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:21.182 04:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:21.182 04:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:21.182 04:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:21.442 04:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:21.442 04:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:21.442 04:59:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.442 04:59:32 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:21.442 04:59:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.442 04:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:21.442 "name": "Existed_Raid", 00:10:21.442 "uuid": "2e817f6a-1b03-47b0-b56d-a79a63a7d7ca", 00:10:21.442 "strip_size_kb": 0, 00:10:21.442 "state": "configuring", 00:10:21.442 "raid_level": "raid1", 00:10:21.442 "superblock": true, 00:10:21.442 "num_base_bdevs": 4, 00:10:21.442 "num_base_bdevs_discovered": 3, 00:10:21.442 "num_base_bdevs_operational": 4, 00:10:21.442 "base_bdevs_list": [ 00:10:21.442 { 00:10:21.442 "name": null, 00:10:21.442 "uuid": "1341fdef-ab27-4b68-99a3-866a81374242", 00:10:21.442 "is_configured": false, 00:10:21.442 "data_offset": 0, 00:10:21.442 "data_size": 63488 00:10:21.442 }, 00:10:21.442 { 00:10:21.442 "name": "BaseBdev2", 00:10:21.442 "uuid": "3d7117ea-de3a-46f3-800b-b023800f7959", 00:10:21.442 "is_configured": true, 00:10:21.442 "data_offset": 2048, 00:10:21.442 "data_size": 63488 00:10:21.442 }, 00:10:21.442 { 00:10:21.442 "name": "BaseBdev3", 00:10:21.442 "uuid": "c2d51d3d-62c9-4f0b-bc08-e26d79af4eb9", 00:10:21.442 "is_configured": true, 00:10:21.442 "data_offset": 2048, 00:10:21.442 "data_size": 63488 00:10:21.442 }, 00:10:21.442 { 00:10:21.442 "name": "BaseBdev4", 00:10:21.442 "uuid": "9311f1c3-4fdd-4bae-8055-71b748247f8a", 00:10:21.442 "is_configured": true, 00:10:21.442 "data_offset": 2048, 00:10:21.442 "data_size": 63488 00:10:21.442 } 00:10:21.442 ] 00:10:21.442 }' 00:10:21.442 04:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:21.442 04:59:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:21.702 04:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:21.702 04:59:32 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:21.702 04:59:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.702 04:59:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:21.702 04:59:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.702 04:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:21.702 04:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:21.703 04:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:21.703 04:59:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.703 04:59:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:21.703 04:59:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.703 04:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 1341fdef-ab27-4b68-99a3-866a81374242 00:10:21.703 04:59:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.703 04:59:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:21.703 [2024-12-14 04:59:32.531639] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:21.703 [2024-12-14 04:59:32.531928] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:10:21.703 [2024-12-14 04:59:32.531952] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:21.703 [2024-12-14 04:59:32.532213] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000006220 00:10:21.703 NewBaseBdev 00:10:21.703 [2024-12-14 04:59:32.532347] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:10:21.703 [2024-12-14 04:59:32.532356] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:10:21.703 [2024-12-14 04:59:32.532454] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:21.703 04:59:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.703 04:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:21.703 04:59:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:10:21.703 04:59:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:21.703 04:59:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:21.703 04:59:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:21.703 04:59:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:21.703 04:59:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:21.703 04:59:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.703 04:59:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:21.703 04:59:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.703 04:59:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:21.703 04:59:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.703 04:59:32 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:21.703 [ 00:10:21.703 { 00:10:21.703 "name": "NewBaseBdev", 00:10:21.703 "aliases": [ 00:10:21.703 "1341fdef-ab27-4b68-99a3-866a81374242" 00:10:21.703 ], 00:10:21.703 "product_name": "Malloc disk", 00:10:21.703 "block_size": 512, 00:10:21.703 "num_blocks": 65536, 00:10:21.703 "uuid": "1341fdef-ab27-4b68-99a3-866a81374242", 00:10:21.703 "assigned_rate_limits": { 00:10:21.703 "rw_ios_per_sec": 0, 00:10:21.703 "rw_mbytes_per_sec": 0, 00:10:21.703 "r_mbytes_per_sec": 0, 00:10:21.703 "w_mbytes_per_sec": 0 00:10:21.703 }, 00:10:21.703 "claimed": true, 00:10:21.703 "claim_type": "exclusive_write", 00:10:21.703 "zoned": false, 00:10:21.703 "supported_io_types": { 00:10:21.703 "read": true, 00:10:21.703 "write": true, 00:10:21.703 "unmap": true, 00:10:21.703 "flush": true, 00:10:21.703 "reset": true, 00:10:21.703 "nvme_admin": false, 00:10:21.703 "nvme_io": false, 00:10:21.703 "nvme_io_md": false, 00:10:21.703 "write_zeroes": true, 00:10:21.703 "zcopy": true, 00:10:21.703 "get_zone_info": false, 00:10:21.703 "zone_management": false, 00:10:21.703 "zone_append": false, 00:10:21.703 "compare": false, 00:10:21.703 "compare_and_write": false, 00:10:21.703 "abort": true, 00:10:21.703 "seek_hole": false, 00:10:21.703 "seek_data": false, 00:10:21.703 "copy": true, 00:10:21.703 "nvme_iov_md": false 00:10:21.703 }, 00:10:21.703 "memory_domains": [ 00:10:21.703 { 00:10:21.703 "dma_device_id": "system", 00:10:21.703 "dma_device_type": 1 00:10:21.703 }, 00:10:21.703 { 00:10:21.703 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:21.703 "dma_device_type": 2 00:10:21.703 } 00:10:21.703 ], 00:10:21.703 "driver_specific": {} 00:10:21.703 } 00:10:21.703 ] 00:10:21.703 04:59:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.703 04:59:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:21.703 04:59:32 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:10:21.703 04:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:21.703 04:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:21.703 04:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:21.703 04:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:21.703 04:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:21.703 04:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:21.703 04:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:21.703 04:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:21.703 04:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:21.703 04:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:21.703 04:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:21.703 04:59:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.703 04:59:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:21.963 04:59:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.963 04:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:21.963 "name": "Existed_Raid", 00:10:21.963 "uuid": "2e817f6a-1b03-47b0-b56d-a79a63a7d7ca", 00:10:21.963 "strip_size_kb": 0, 00:10:21.963 
"state": "online", 00:10:21.963 "raid_level": "raid1", 00:10:21.963 "superblock": true, 00:10:21.963 "num_base_bdevs": 4, 00:10:21.963 "num_base_bdevs_discovered": 4, 00:10:21.963 "num_base_bdevs_operational": 4, 00:10:21.963 "base_bdevs_list": [ 00:10:21.963 { 00:10:21.963 "name": "NewBaseBdev", 00:10:21.963 "uuid": "1341fdef-ab27-4b68-99a3-866a81374242", 00:10:21.963 "is_configured": true, 00:10:21.963 "data_offset": 2048, 00:10:21.963 "data_size": 63488 00:10:21.963 }, 00:10:21.963 { 00:10:21.963 "name": "BaseBdev2", 00:10:21.963 "uuid": "3d7117ea-de3a-46f3-800b-b023800f7959", 00:10:21.963 "is_configured": true, 00:10:21.963 "data_offset": 2048, 00:10:21.963 "data_size": 63488 00:10:21.963 }, 00:10:21.963 { 00:10:21.963 "name": "BaseBdev3", 00:10:21.963 "uuid": "c2d51d3d-62c9-4f0b-bc08-e26d79af4eb9", 00:10:21.963 "is_configured": true, 00:10:21.963 "data_offset": 2048, 00:10:21.963 "data_size": 63488 00:10:21.963 }, 00:10:21.963 { 00:10:21.963 "name": "BaseBdev4", 00:10:21.963 "uuid": "9311f1c3-4fdd-4bae-8055-71b748247f8a", 00:10:21.963 "is_configured": true, 00:10:21.963 "data_offset": 2048, 00:10:21.963 "data_size": 63488 00:10:21.963 } 00:10:21.963 ] 00:10:21.963 }' 00:10:21.963 04:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:21.963 04:59:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:22.223 04:59:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:22.223 04:59:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:22.223 04:59:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:22.223 04:59:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:22.223 04:59:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:22.223 
04:59:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:22.223 04:59:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:22.223 04:59:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.223 04:59:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:22.223 04:59:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:22.223 [2024-12-14 04:59:33.043104] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:22.223 04:59:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.223 04:59:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:22.223 "name": "Existed_Raid", 00:10:22.223 "aliases": [ 00:10:22.223 "2e817f6a-1b03-47b0-b56d-a79a63a7d7ca" 00:10:22.223 ], 00:10:22.223 "product_name": "Raid Volume", 00:10:22.223 "block_size": 512, 00:10:22.223 "num_blocks": 63488, 00:10:22.223 "uuid": "2e817f6a-1b03-47b0-b56d-a79a63a7d7ca", 00:10:22.223 "assigned_rate_limits": { 00:10:22.223 "rw_ios_per_sec": 0, 00:10:22.223 "rw_mbytes_per_sec": 0, 00:10:22.223 "r_mbytes_per_sec": 0, 00:10:22.223 "w_mbytes_per_sec": 0 00:10:22.223 }, 00:10:22.223 "claimed": false, 00:10:22.223 "zoned": false, 00:10:22.223 "supported_io_types": { 00:10:22.223 "read": true, 00:10:22.223 "write": true, 00:10:22.223 "unmap": false, 00:10:22.223 "flush": false, 00:10:22.223 "reset": true, 00:10:22.223 "nvme_admin": false, 00:10:22.223 "nvme_io": false, 00:10:22.223 "nvme_io_md": false, 00:10:22.223 "write_zeroes": true, 00:10:22.223 "zcopy": false, 00:10:22.223 "get_zone_info": false, 00:10:22.223 "zone_management": false, 00:10:22.223 "zone_append": false, 00:10:22.223 "compare": false, 00:10:22.223 "compare_and_write": false, 00:10:22.223 
"abort": false, 00:10:22.223 "seek_hole": false, 00:10:22.223 "seek_data": false, 00:10:22.223 "copy": false, 00:10:22.223 "nvme_iov_md": false 00:10:22.223 }, 00:10:22.223 "memory_domains": [ 00:10:22.223 { 00:10:22.223 "dma_device_id": "system", 00:10:22.223 "dma_device_type": 1 00:10:22.223 }, 00:10:22.223 { 00:10:22.223 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:22.223 "dma_device_type": 2 00:10:22.223 }, 00:10:22.223 { 00:10:22.223 "dma_device_id": "system", 00:10:22.223 "dma_device_type": 1 00:10:22.223 }, 00:10:22.223 { 00:10:22.223 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:22.223 "dma_device_type": 2 00:10:22.223 }, 00:10:22.223 { 00:10:22.223 "dma_device_id": "system", 00:10:22.223 "dma_device_type": 1 00:10:22.223 }, 00:10:22.223 { 00:10:22.224 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:22.224 "dma_device_type": 2 00:10:22.224 }, 00:10:22.224 { 00:10:22.224 "dma_device_id": "system", 00:10:22.224 "dma_device_type": 1 00:10:22.224 }, 00:10:22.224 { 00:10:22.224 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:22.224 "dma_device_type": 2 00:10:22.224 } 00:10:22.224 ], 00:10:22.224 "driver_specific": { 00:10:22.224 "raid": { 00:10:22.224 "uuid": "2e817f6a-1b03-47b0-b56d-a79a63a7d7ca", 00:10:22.224 "strip_size_kb": 0, 00:10:22.224 "state": "online", 00:10:22.224 "raid_level": "raid1", 00:10:22.224 "superblock": true, 00:10:22.224 "num_base_bdevs": 4, 00:10:22.224 "num_base_bdevs_discovered": 4, 00:10:22.224 "num_base_bdevs_operational": 4, 00:10:22.224 "base_bdevs_list": [ 00:10:22.224 { 00:10:22.224 "name": "NewBaseBdev", 00:10:22.224 "uuid": "1341fdef-ab27-4b68-99a3-866a81374242", 00:10:22.224 "is_configured": true, 00:10:22.224 "data_offset": 2048, 00:10:22.224 "data_size": 63488 00:10:22.224 }, 00:10:22.224 { 00:10:22.224 "name": "BaseBdev2", 00:10:22.224 "uuid": "3d7117ea-de3a-46f3-800b-b023800f7959", 00:10:22.224 "is_configured": true, 00:10:22.224 "data_offset": 2048, 00:10:22.224 "data_size": 63488 00:10:22.224 }, 00:10:22.224 { 
00:10:22.224 "name": "BaseBdev3", 00:10:22.224 "uuid": "c2d51d3d-62c9-4f0b-bc08-e26d79af4eb9", 00:10:22.224 "is_configured": true, 00:10:22.224 "data_offset": 2048, 00:10:22.224 "data_size": 63488 00:10:22.224 }, 00:10:22.224 { 00:10:22.224 "name": "BaseBdev4", 00:10:22.224 "uuid": "9311f1c3-4fdd-4bae-8055-71b748247f8a", 00:10:22.224 "is_configured": true, 00:10:22.224 "data_offset": 2048, 00:10:22.224 "data_size": 63488 00:10:22.224 } 00:10:22.224 ] 00:10:22.224 } 00:10:22.224 } 00:10:22.224 }' 00:10:22.224 04:59:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:22.484 04:59:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:22.484 BaseBdev2 00:10:22.484 BaseBdev3 00:10:22.484 BaseBdev4' 00:10:22.484 04:59:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:22.484 04:59:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:22.484 04:59:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:22.484 04:59:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:22.484 04:59:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:22.484 04:59:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.484 04:59:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:22.484 04:59:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.484 04:59:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
00:10:22.484 04:59:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:22.484 04:59:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:22.484 04:59:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:22.484 04:59:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:22.484 04:59:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.484 04:59:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:22.484 04:59:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.484 04:59:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:22.484 04:59:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:22.484 04:59:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:22.484 04:59:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:22.484 04:59:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:22.484 04:59:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.484 04:59:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:22.484 04:59:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.484 04:59:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:22.484 04:59:33 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:22.484 04:59:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:22.484 04:59:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:22.484 04:59:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:22.484 04:59:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.484 04:59:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:22.484 04:59:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.484 04:59:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:22.484 04:59:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:22.484 04:59:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:22.484 04:59:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.484 04:59:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:22.484 [2024-12-14 04:59:33.346259] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:22.484 [2024-12-14 04:59:33.346326] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:22.484 [2024-12-14 04:59:33.346434] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:22.484 [2024-12-14 04:59:33.346741] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:22.484 [2024-12-14 04:59:33.346816] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000006d00 name Existed_Raid, state offline 00:10:22.484 04:59:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.484 04:59:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 84630 00:10:22.484 04:59:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 84630 ']' 00:10:22.484 04:59:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 84630 00:10:22.484 04:59:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:10:22.484 04:59:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:22.484 04:59:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84630 00:10:22.744 04:59:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:22.744 04:59:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:22.744 04:59:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84630' 00:10:22.744 killing process with pid 84630 00:10:22.744 04:59:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 84630 00:10:22.744 [2024-12-14 04:59:33.383716] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:22.744 04:59:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 84630 00:10:22.744 [2024-12-14 04:59:33.423595] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:23.005 04:59:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:10:23.005 00:10:23.005 real 0m9.326s 00:10:23.005 user 0m15.966s 00:10:23.005 sys 0m1.962s 00:10:23.005 04:59:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:10:23.005 04:59:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:23.005 ************************************ 00:10:23.005 END TEST raid_state_function_test_sb 00:10:23.005 ************************************ 00:10:23.005 04:59:33 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:10:23.005 04:59:33 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:10:23.005 04:59:33 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:23.005 04:59:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:23.005 ************************************ 00:10:23.005 START TEST raid_superblock_test 00:10:23.005 ************************************ 00:10:23.005 04:59:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 4 00:10:23.005 04:59:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:10:23.005 04:59:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:10:23.005 04:59:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:10:23.005 04:59:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:10:23.005 04:59:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:10:23.005 04:59:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:10:23.005 04:59:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:10:23.005 04:59:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:10:23.005 04:59:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:10:23.005 04:59:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:10:23.005 04:59:33 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:10:23.005 04:59:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:10:23.005 04:59:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:10:23.005 04:59:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:10:23.005 04:59:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:10:23.005 04:59:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=85278 00:10:23.005 04:59:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:10:23.005 04:59:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 85278 00:10:23.005 04:59:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 85278 ']' 00:10:23.005 04:59:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:23.005 04:59:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:23.005 04:59:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:23.005 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:23.005 04:59:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:23.005 04:59:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.005 [2024-12-14 04:59:33.830115] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:10:23.005 [2024-12-14 04:59:33.830291] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85278 ] 00:10:23.271 [2024-12-14 04:59:33.974736] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:23.271 [2024-12-14 04:59:34.020466] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:23.271 [2024-12-14 04:59:34.062398] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:23.271 [2024-12-14 04:59:34.062435] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:23.866 04:59:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:23.866 04:59:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:10:23.866 04:59:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:10:23.866 04:59:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:23.866 04:59:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:10:23.866 04:59:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:10:23.866 04:59:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:10:23.866 04:59:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:23.866 04:59:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:23.866 04:59:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:23.866 04:59:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:10:23.866 
04:59:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.866 04:59:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.866 malloc1 00:10:23.866 04:59:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.866 04:59:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:23.866 04:59:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.866 04:59:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.866 [2024-12-14 04:59:34.676674] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:23.866 [2024-12-14 04:59:34.676802] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:23.866 [2024-12-14 04:59:34.676843] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:23.866 [2024-12-14 04:59:34.676889] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:23.866 [2024-12-14 04:59:34.678975] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:23.866 [2024-12-14 04:59:34.679048] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:23.866 pt1 00:10:23.866 04:59:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.866 04:59:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:23.866 04:59:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:23.866 04:59:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:10:23.866 04:59:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:10:23.866 04:59:34 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:10:23.866 04:59:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:23.866 04:59:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:23.866 04:59:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:23.866 04:59:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:10:23.866 04:59:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.866 04:59:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.866 malloc2 00:10:23.866 04:59:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.866 04:59:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:23.866 04:59:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.866 04:59:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.866 [2024-12-14 04:59:34.723947] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:23.867 [2024-12-14 04:59:34.724059] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:23.867 [2024-12-14 04:59:34.724097] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:10:23.867 [2024-12-14 04:59:34.724122] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:23.867 [2024-12-14 04:59:34.728949] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:23.867 [2024-12-14 04:59:34.729029] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:23.867 
pt2 00:10:23.867 04:59:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.867 04:59:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:23.867 04:59:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:23.867 04:59:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:10:23.867 04:59:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:10:23.867 04:59:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:10:23.867 04:59:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:23.867 04:59:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:23.867 04:59:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:23.867 04:59:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:10:23.867 04:59:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.867 04:59:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.127 malloc3 00:10:24.127 04:59:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.127 04:59:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:24.127 04:59:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.127 04:59:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.127 [2024-12-14 04:59:34.758327] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:24.127 [2024-12-14 04:59:34.758432] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:24.127 [2024-12-14 04:59:34.758467] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:10:24.127 [2024-12-14 04:59:34.758498] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:24.127 [2024-12-14 04:59:34.760542] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:24.127 [2024-12-14 04:59:34.760614] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:24.127 pt3 00:10:24.127 04:59:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.127 04:59:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:24.127 04:59:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:24.127 04:59:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:10:24.127 04:59:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:10:24.127 04:59:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:10:24.127 04:59:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:24.127 04:59:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:24.127 04:59:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:24.127 04:59:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:10:24.127 04:59:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.127 04:59:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.127 malloc4 00:10:24.127 04:59:34 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.127 04:59:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:10:24.127 04:59:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.127 04:59:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.127 [2024-12-14 04:59:34.790652] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:10:24.127 [2024-12-14 04:59:34.790755] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:24.127 [2024-12-14 04:59:34.790786] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:10:24.127 [2024-12-14 04:59:34.790819] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:24.127 [2024-12-14 04:59:34.792870] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:24.127 [2024-12-14 04:59:34.792942] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:10:24.127 pt4 00:10:24.127 04:59:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.127 04:59:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:24.127 04:59:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:24.127 04:59:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:10:24.127 04:59:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.127 04:59:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.127 [2024-12-14 04:59:34.802703] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:24.127 [2024-12-14 04:59:34.804514] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:24.127 [2024-12-14 04:59:34.804571] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:24.127 [2024-12-14 04:59:34.804611] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:10:24.127 [2024-12-14 04:59:34.804759] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:10:24.127 [2024-12-14 04:59:34.804774] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:24.127 [2024-12-14 04:59:34.805028] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:10:24.127 [2024-12-14 04:59:34.805179] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:10:24.127 [2024-12-14 04:59:34.805191] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:10:24.127 [2024-12-14 04:59:34.805311] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:24.127 04:59:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.127 04:59:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:10:24.127 04:59:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:24.127 04:59:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:24.127 04:59:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:24.127 04:59:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:24.127 04:59:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:24.127 04:59:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:24.127 
04:59:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:24.127 04:59:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:24.128 04:59:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:24.128 04:59:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:24.128 04:59:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:24.128 04:59:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.128 04:59:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.128 04:59:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.128 04:59:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:24.128 "name": "raid_bdev1", 00:10:24.128 "uuid": "f71e06ee-d3a4-4cee-8804-eccdc2011269", 00:10:24.128 "strip_size_kb": 0, 00:10:24.128 "state": "online", 00:10:24.128 "raid_level": "raid1", 00:10:24.128 "superblock": true, 00:10:24.128 "num_base_bdevs": 4, 00:10:24.128 "num_base_bdevs_discovered": 4, 00:10:24.128 "num_base_bdevs_operational": 4, 00:10:24.128 "base_bdevs_list": [ 00:10:24.128 { 00:10:24.128 "name": "pt1", 00:10:24.128 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:24.128 "is_configured": true, 00:10:24.128 "data_offset": 2048, 00:10:24.128 "data_size": 63488 00:10:24.128 }, 00:10:24.128 { 00:10:24.128 "name": "pt2", 00:10:24.128 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:24.128 "is_configured": true, 00:10:24.128 "data_offset": 2048, 00:10:24.128 "data_size": 63488 00:10:24.128 }, 00:10:24.128 { 00:10:24.128 "name": "pt3", 00:10:24.128 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:24.128 "is_configured": true, 00:10:24.128 "data_offset": 2048, 00:10:24.128 "data_size": 63488 
00:10:24.128 }, 00:10:24.128 { 00:10:24.128 "name": "pt4", 00:10:24.128 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:24.128 "is_configured": true, 00:10:24.128 "data_offset": 2048, 00:10:24.128 "data_size": 63488 00:10:24.128 } 00:10:24.128 ] 00:10:24.128 }' 00:10:24.128 04:59:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:24.128 04:59:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.386 04:59:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:10:24.386 04:59:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:24.386 04:59:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:24.386 04:59:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:24.386 04:59:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:24.386 04:59:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:24.386 04:59:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:24.386 04:59:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:24.386 04:59:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.386 04:59:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.386 [2024-12-14 04:59:35.222243] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:24.386 04:59:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.386 04:59:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:24.386 "name": "raid_bdev1", 00:10:24.386 "aliases": [ 00:10:24.386 "f71e06ee-d3a4-4cee-8804-eccdc2011269" 00:10:24.386 ], 
00:10:24.386 "product_name": "Raid Volume", 00:10:24.386 "block_size": 512, 00:10:24.386 "num_blocks": 63488, 00:10:24.386 "uuid": "f71e06ee-d3a4-4cee-8804-eccdc2011269", 00:10:24.386 "assigned_rate_limits": { 00:10:24.386 "rw_ios_per_sec": 0, 00:10:24.386 "rw_mbytes_per_sec": 0, 00:10:24.386 "r_mbytes_per_sec": 0, 00:10:24.386 "w_mbytes_per_sec": 0 00:10:24.386 }, 00:10:24.386 "claimed": false, 00:10:24.386 "zoned": false, 00:10:24.386 "supported_io_types": { 00:10:24.386 "read": true, 00:10:24.386 "write": true, 00:10:24.386 "unmap": false, 00:10:24.386 "flush": false, 00:10:24.386 "reset": true, 00:10:24.386 "nvme_admin": false, 00:10:24.386 "nvme_io": false, 00:10:24.386 "nvme_io_md": false, 00:10:24.386 "write_zeroes": true, 00:10:24.386 "zcopy": false, 00:10:24.386 "get_zone_info": false, 00:10:24.386 "zone_management": false, 00:10:24.386 "zone_append": false, 00:10:24.386 "compare": false, 00:10:24.386 "compare_and_write": false, 00:10:24.386 "abort": false, 00:10:24.386 "seek_hole": false, 00:10:24.386 "seek_data": false, 00:10:24.386 "copy": false, 00:10:24.386 "nvme_iov_md": false 00:10:24.386 }, 00:10:24.386 "memory_domains": [ 00:10:24.386 { 00:10:24.386 "dma_device_id": "system", 00:10:24.386 "dma_device_type": 1 00:10:24.386 }, 00:10:24.386 { 00:10:24.386 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:24.386 "dma_device_type": 2 00:10:24.386 }, 00:10:24.386 { 00:10:24.386 "dma_device_id": "system", 00:10:24.386 "dma_device_type": 1 00:10:24.386 }, 00:10:24.386 { 00:10:24.386 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:24.386 "dma_device_type": 2 00:10:24.386 }, 00:10:24.386 { 00:10:24.386 "dma_device_id": "system", 00:10:24.386 "dma_device_type": 1 00:10:24.386 }, 00:10:24.386 { 00:10:24.386 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:24.386 "dma_device_type": 2 00:10:24.386 }, 00:10:24.386 { 00:10:24.386 "dma_device_id": "system", 00:10:24.386 "dma_device_type": 1 00:10:24.386 }, 00:10:24.387 { 00:10:24.387 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:10:24.387 "dma_device_type": 2 00:10:24.387 } 00:10:24.387 ], 00:10:24.387 "driver_specific": { 00:10:24.387 "raid": { 00:10:24.387 "uuid": "f71e06ee-d3a4-4cee-8804-eccdc2011269", 00:10:24.387 "strip_size_kb": 0, 00:10:24.387 "state": "online", 00:10:24.387 "raid_level": "raid1", 00:10:24.387 "superblock": true, 00:10:24.387 "num_base_bdevs": 4, 00:10:24.387 "num_base_bdevs_discovered": 4, 00:10:24.387 "num_base_bdevs_operational": 4, 00:10:24.387 "base_bdevs_list": [ 00:10:24.387 { 00:10:24.387 "name": "pt1", 00:10:24.387 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:24.387 "is_configured": true, 00:10:24.387 "data_offset": 2048, 00:10:24.387 "data_size": 63488 00:10:24.387 }, 00:10:24.387 { 00:10:24.387 "name": "pt2", 00:10:24.387 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:24.387 "is_configured": true, 00:10:24.387 "data_offset": 2048, 00:10:24.387 "data_size": 63488 00:10:24.387 }, 00:10:24.387 { 00:10:24.387 "name": "pt3", 00:10:24.387 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:24.387 "is_configured": true, 00:10:24.387 "data_offset": 2048, 00:10:24.387 "data_size": 63488 00:10:24.387 }, 00:10:24.387 { 00:10:24.387 "name": "pt4", 00:10:24.387 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:24.387 "is_configured": true, 00:10:24.387 "data_offset": 2048, 00:10:24.387 "data_size": 63488 00:10:24.387 } 00:10:24.387 ] 00:10:24.387 } 00:10:24.387 } 00:10:24.387 }' 00:10:24.387 04:59:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:24.646 04:59:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:24.646 pt2 00:10:24.646 pt3 00:10:24.646 pt4' 00:10:24.646 04:59:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:24.646 04:59:35 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:24.646 04:59:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:24.646 04:59:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:24.646 04:59:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:24.646 04:59:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.646 04:59:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.646 04:59:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.646 04:59:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:24.646 04:59:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:24.646 04:59:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:24.646 04:59:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:24.646 04:59:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.646 04:59:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.646 04:59:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:24.646 04:59:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.646 04:59:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:24.646 04:59:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:24.646 04:59:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:24.646 04:59:35 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:24.646 04:59:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:24.646 04:59:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.646 04:59:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.646 04:59:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.646 04:59:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:24.646 04:59:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:24.646 04:59:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:24.646 04:59:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:10:24.646 04:59:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.646 04:59:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.646 04:59:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:24.646 04:59:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.646 04:59:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:24.646 04:59:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:24.646 04:59:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:24.646 04:59:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:10:24.646 04:59:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:10:24.906 04:59:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.906 [2024-12-14 04:59:35.533654] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:24.906 04:59:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.906 04:59:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=f71e06ee-d3a4-4cee-8804-eccdc2011269 00:10:24.906 04:59:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z f71e06ee-d3a4-4cee-8804-eccdc2011269 ']' 00:10:24.906 04:59:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:24.906 04:59:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.906 04:59:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.906 [2024-12-14 04:59:35.577297] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:24.906 [2024-12-14 04:59:35.577324] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:24.906 [2024-12-14 04:59:35.577406] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:24.906 [2024-12-14 04:59:35.577491] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:24.906 [2024-12-14 04:59:35.577501] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:10:24.906 04:59:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.906 04:59:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:10:24.906 04:59:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:24.906 04:59:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 
-- # xtrace_disable 00:10:24.906 04:59:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.906 04:59:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.906 04:59:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:10:24.906 04:59:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:10:24.906 04:59:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:24.906 04:59:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:10:24.906 04:59:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.906 04:59:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.906 04:59:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.906 04:59:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:24.906 04:59:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:10:24.907 04:59:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.907 04:59:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.907 04:59:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.907 04:59:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:24.907 04:59:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:10:24.907 04:59:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.907 04:59:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.907 04:59:35 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.907 04:59:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:24.907 04:59:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:10:24.907 04:59:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.907 04:59:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.907 04:59:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.907 04:59:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:10:24.907 04:59:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.907 04:59:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:10:24.907 04:59:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.907 04:59:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.907 04:59:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:10:24.907 04:59:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:24.907 04:59:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:10:24.907 04:59:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:24.907 04:59:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:10:24.907 04:59:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:24.907 04:59:35 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:10:24.907 04:59:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:24.907 04:59:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:24.907 04:59:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.907 04:59:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.907 [2024-12-14 04:59:35.737066] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:10:24.907 [2024-12-14 04:59:35.738842] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:10:24.907 [2024-12-14 04:59:35.738886] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:10:24.907 [2024-12-14 04:59:35.738913] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:10:24.907 [2024-12-14 04:59:35.738956] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:10:24.907 [2024-12-14 04:59:35.738997] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:10:24.907 [2024-12-14 04:59:35.739031] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:10:24.907 [2024-12-14 04:59:35.739047] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:10:24.907 [2024-12-14 04:59:35.739059] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:24.907 [2024-12-14 04:59:35.739069] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name 
raid_bdev1, state configuring 00:10:24.907 request: 00:10:24.907 { 00:10:24.907 "name": "raid_bdev1", 00:10:24.907 "raid_level": "raid1", 00:10:24.907 "base_bdevs": [ 00:10:24.907 "malloc1", 00:10:24.907 "malloc2", 00:10:24.907 "malloc3", 00:10:24.907 "malloc4" 00:10:24.907 ], 00:10:24.907 "superblock": false, 00:10:24.907 "method": "bdev_raid_create", 00:10:24.907 "req_id": 1 00:10:24.907 } 00:10:24.907 Got JSON-RPC error response 00:10:24.907 response: 00:10:24.907 { 00:10:24.907 "code": -17, 00:10:24.907 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:10:24.907 } 00:10:24.907 04:59:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:10:24.907 04:59:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:10:24.907 04:59:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:24.907 04:59:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:24.907 04:59:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:24.907 04:59:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:24.907 04:59:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:10:24.907 04:59:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.907 04:59:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.907 04:59:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.167 04:59:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:10:25.167 04:59:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:10:25.167 04:59:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:25.167 
04:59:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.167 04:59:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.167 [2024-12-14 04:59:35.804911] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:25.167 [2024-12-14 04:59:35.804994] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:25.167 [2024-12-14 04:59:35.805030] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:25.167 [2024-12-14 04:59:35.805057] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:25.167 [2024-12-14 04:59:35.807124] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:25.167 [2024-12-14 04:59:35.807212] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:25.167 [2024-12-14 04:59:35.807309] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:25.167 [2024-12-14 04:59:35.807404] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:25.167 pt1 00:10:25.167 04:59:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.167 04:59:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:10:25.167 04:59:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:25.167 04:59:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:25.167 04:59:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:25.167 04:59:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:25.167 04:59:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:25.167 04:59:35 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:25.167 04:59:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:25.167 04:59:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:25.167 04:59:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:25.167 04:59:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:25.167 04:59:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.167 04:59:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.167 04:59:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:25.167 04:59:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.167 04:59:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:25.167 "name": "raid_bdev1", 00:10:25.167 "uuid": "f71e06ee-d3a4-4cee-8804-eccdc2011269", 00:10:25.167 "strip_size_kb": 0, 00:10:25.167 "state": "configuring", 00:10:25.167 "raid_level": "raid1", 00:10:25.167 "superblock": true, 00:10:25.167 "num_base_bdevs": 4, 00:10:25.167 "num_base_bdevs_discovered": 1, 00:10:25.167 "num_base_bdevs_operational": 4, 00:10:25.167 "base_bdevs_list": [ 00:10:25.167 { 00:10:25.167 "name": "pt1", 00:10:25.167 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:25.167 "is_configured": true, 00:10:25.167 "data_offset": 2048, 00:10:25.167 "data_size": 63488 00:10:25.167 }, 00:10:25.167 { 00:10:25.167 "name": null, 00:10:25.167 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:25.167 "is_configured": false, 00:10:25.167 "data_offset": 2048, 00:10:25.167 "data_size": 63488 00:10:25.167 }, 00:10:25.167 { 00:10:25.167 "name": null, 00:10:25.167 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:25.167 
"is_configured": false, 00:10:25.167 "data_offset": 2048, 00:10:25.167 "data_size": 63488 00:10:25.167 }, 00:10:25.167 { 00:10:25.167 "name": null, 00:10:25.167 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:25.167 "is_configured": false, 00:10:25.167 "data_offset": 2048, 00:10:25.167 "data_size": 63488 00:10:25.167 } 00:10:25.167 ] 00:10:25.167 }' 00:10:25.167 04:59:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:25.167 04:59:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.428 04:59:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:10:25.428 04:59:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:25.428 04:59:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.428 04:59:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.428 [2024-12-14 04:59:36.224235] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:25.428 [2024-12-14 04:59:36.224325] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:25.428 [2024-12-14 04:59:36.224360] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:10:25.428 [2024-12-14 04:59:36.224386] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:25.428 [2024-12-14 04:59:36.224769] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:25.428 [2024-12-14 04:59:36.224829] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:25.428 [2024-12-14 04:59:36.224933] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:25.428 [2024-12-14 04:59:36.224993] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 
00:10:25.428 pt2 00:10:25.428 04:59:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.428 04:59:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:10:25.428 04:59:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.428 04:59:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.428 [2024-12-14 04:59:36.236223] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:10:25.428 04:59:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.428 04:59:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:10:25.428 04:59:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:25.428 04:59:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:25.428 04:59:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:25.428 04:59:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:25.428 04:59:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:25.428 04:59:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:25.428 04:59:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:25.428 04:59:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:25.428 04:59:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:25.428 04:59:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:25.428 04:59:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:10:25.428 04:59:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.428 04:59:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.428 04:59:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.428 04:59:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:25.428 "name": "raid_bdev1", 00:10:25.428 "uuid": "f71e06ee-d3a4-4cee-8804-eccdc2011269", 00:10:25.428 "strip_size_kb": 0, 00:10:25.428 "state": "configuring", 00:10:25.428 "raid_level": "raid1", 00:10:25.428 "superblock": true, 00:10:25.428 "num_base_bdevs": 4, 00:10:25.428 "num_base_bdevs_discovered": 1, 00:10:25.428 "num_base_bdevs_operational": 4, 00:10:25.428 "base_bdevs_list": [ 00:10:25.428 { 00:10:25.428 "name": "pt1", 00:10:25.428 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:25.428 "is_configured": true, 00:10:25.428 "data_offset": 2048, 00:10:25.428 "data_size": 63488 00:10:25.428 }, 00:10:25.428 { 00:10:25.428 "name": null, 00:10:25.428 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:25.428 "is_configured": false, 00:10:25.428 "data_offset": 0, 00:10:25.428 "data_size": 63488 00:10:25.428 }, 00:10:25.428 { 00:10:25.428 "name": null, 00:10:25.428 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:25.428 "is_configured": false, 00:10:25.428 "data_offset": 2048, 00:10:25.428 "data_size": 63488 00:10:25.428 }, 00:10:25.428 { 00:10:25.428 "name": null, 00:10:25.428 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:25.428 "is_configured": false, 00:10:25.428 "data_offset": 2048, 00:10:25.428 "data_size": 63488 00:10:25.428 } 00:10:25.428 ] 00:10:25.428 }' 00:10:25.428 04:59:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:25.428 04:59:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.997 04:59:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i 
= 1 )) 00:10:25.997 04:59:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:25.997 04:59:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:25.997 04:59:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.997 04:59:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.997 [2024-12-14 04:59:36.667478] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:25.997 [2024-12-14 04:59:36.667582] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:25.997 [2024-12-14 04:59:36.667616] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:10:25.997 [2024-12-14 04:59:36.667645] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:25.997 [2024-12-14 04:59:36.668043] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:25.997 [2024-12-14 04:59:36.668113] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:25.997 [2024-12-14 04:59:36.668239] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:25.997 [2024-12-14 04:59:36.668303] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:25.997 pt2 00:10:25.997 04:59:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.997 04:59:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:25.997 04:59:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:25.997 04:59:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:25.997 04:59:36 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.997 04:59:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.997 [2024-12-14 04:59:36.679411] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:25.997 [2024-12-14 04:59:36.679516] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:25.997 [2024-12-14 04:59:36.679549] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:10:25.997 [2024-12-14 04:59:36.679581] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:25.997 [2024-12-14 04:59:36.679935] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:25.997 [2024-12-14 04:59:36.680002] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:25.997 [2024-12-14 04:59:36.680101] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:25.997 [2024-12-14 04:59:36.680169] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:25.997 pt3 00:10:25.997 04:59:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.997 04:59:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:25.997 04:59:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:25.997 04:59:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:10:25.997 04:59:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.997 04:59:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.997 [2024-12-14 04:59:36.691401] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:10:25.997 [2024-12-14 
04:59:36.691447] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:25.997 [2024-12-14 04:59:36.691459] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:10:25.997 [2024-12-14 04:59:36.691468] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:25.997 [2024-12-14 04:59:36.691751] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:25.997 [2024-12-14 04:59:36.691774] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:10:25.998 [2024-12-14 04:59:36.691821] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:10:25.998 [2024-12-14 04:59:36.691839] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:10:25.998 [2024-12-14 04:59:36.691951] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:10:25.998 [2024-12-14 04:59:36.691973] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:25.998 [2024-12-14 04:59:36.692239] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:25.998 [2024-12-14 04:59:36.692370] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:10:25.998 [2024-12-14 04:59:36.692381] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:10:25.998 [2024-12-14 04:59:36.692477] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:25.998 pt4 00:10:25.998 04:59:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.998 04:59:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:25.998 04:59:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:25.998 04:59:36 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:10:25.998 04:59:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:25.998 04:59:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:25.998 04:59:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:25.998 04:59:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:25.998 04:59:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:25.998 04:59:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:25.998 04:59:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:25.998 04:59:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:25.998 04:59:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:25.998 04:59:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:25.998 04:59:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:25.998 04:59:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.998 04:59:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.998 04:59:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.998 04:59:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:25.998 "name": "raid_bdev1", 00:10:25.998 "uuid": "f71e06ee-d3a4-4cee-8804-eccdc2011269", 00:10:25.998 "strip_size_kb": 0, 00:10:25.998 "state": "online", 00:10:25.998 "raid_level": "raid1", 00:10:25.998 "superblock": true, 00:10:25.998 "num_base_bdevs": 4, 00:10:25.998 
"num_base_bdevs_discovered": 4, 00:10:25.998 "num_base_bdevs_operational": 4, 00:10:25.998 "base_bdevs_list": [ 00:10:25.998 { 00:10:25.998 "name": "pt1", 00:10:25.998 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:25.998 "is_configured": true, 00:10:25.998 "data_offset": 2048, 00:10:25.998 "data_size": 63488 00:10:25.998 }, 00:10:25.998 { 00:10:25.998 "name": "pt2", 00:10:25.998 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:25.998 "is_configured": true, 00:10:25.998 "data_offset": 2048, 00:10:25.998 "data_size": 63488 00:10:25.998 }, 00:10:25.998 { 00:10:25.998 "name": "pt3", 00:10:25.998 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:25.998 "is_configured": true, 00:10:25.998 "data_offset": 2048, 00:10:25.998 "data_size": 63488 00:10:25.998 }, 00:10:25.998 { 00:10:25.998 "name": "pt4", 00:10:25.998 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:25.998 "is_configured": true, 00:10:25.998 "data_offset": 2048, 00:10:25.998 "data_size": 63488 00:10:25.998 } 00:10:25.998 ] 00:10:25.998 }' 00:10:25.998 04:59:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:25.998 04:59:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.568 04:59:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:10:26.568 04:59:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:26.568 04:59:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:26.568 04:59:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:26.568 04:59:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:26.568 04:59:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:26.568 04:59:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:10:26.568 04:59:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:26.568 04:59:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.568 04:59:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.568 [2024-12-14 04:59:37.178891] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:26.568 04:59:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.568 04:59:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:26.568 "name": "raid_bdev1", 00:10:26.568 "aliases": [ 00:10:26.568 "f71e06ee-d3a4-4cee-8804-eccdc2011269" 00:10:26.568 ], 00:10:26.568 "product_name": "Raid Volume", 00:10:26.568 "block_size": 512, 00:10:26.568 "num_blocks": 63488, 00:10:26.568 "uuid": "f71e06ee-d3a4-4cee-8804-eccdc2011269", 00:10:26.568 "assigned_rate_limits": { 00:10:26.568 "rw_ios_per_sec": 0, 00:10:26.568 "rw_mbytes_per_sec": 0, 00:10:26.568 "r_mbytes_per_sec": 0, 00:10:26.568 "w_mbytes_per_sec": 0 00:10:26.568 }, 00:10:26.568 "claimed": false, 00:10:26.568 "zoned": false, 00:10:26.568 "supported_io_types": { 00:10:26.568 "read": true, 00:10:26.568 "write": true, 00:10:26.568 "unmap": false, 00:10:26.568 "flush": false, 00:10:26.568 "reset": true, 00:10:26.568 "nvme_admin": false, 00:10:26.568 "nvme_io": false, 00:10:26.568 "nvme_io_md": false, 00:10:26.568 "write_zeroes": true, 00:10:26.568 "zcopy": false, 00:10:26.568 "get_zone_info": false, 00:10:26.568 "zone_management": false, 00:10:26.568 "zone_append": false, 00:10:26.568 "compare": false, 00:10:26.568 "compare_and_write": false, 00:10:26.568 "abort": false, 00:10:26.568 "seek_hole": false, 00:10:26.568 "seek_data": false, 00:10:26.568 "copy": false, 00:10:26.568 "nvme_iov_md": false 00:10:26.568 }, 00:10:26.568 "memory_domains": [ 00:10:26.568 { 00:10:26.568 "dma_device_id": "system", 00:10:26.568 
"dma_device_type": 1 00:10:26.568 }, 00:10:26.568 { 00:10:26.568 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:26.568 "dma_device_type": 2 00:10:26.568 }, 00:10:26.568 { 00:10:26.568 "dma_device_id": "system", 00:10:26.568 "dma_device_type": 1 00:10:26.568 }, 00:10:26.568 { 00:10:26.568 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:26.568 "dma_device_type": 2 00:10:26.568 }, 00:10:26.568 { 00:10:26.568 "dma_device_id": "system", 00:10:26.568 "dma_device_type": 1 00:10:26.568 }, 00:10:26.568 { 00:10:26.568 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:26.568 "dma_device_type": 2 00:10:26.568 }, 00:10:26.568 { 00:10:26.568 "dma_device_id": "system", 00:10:26.568 "dma_device_type": 1 00:10:26.568 }, 00:10:26.568 { 00:10:26.568 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:26.568 "dma_device_type": 2 00:10:26.568 } 00:10:26.568 ], 00:10:26.568 "driver_specific": { 00:10:26.568 "raid": { 00:10:26.568 "uuid": "f71e06ee-d3a4-4cee-8804-eccdc2011269", 00:10:26.568 "strip_size_kb": 0, 00:10:26.568 "state": "online", 00:10:26.568 "raid_level": "raid1", 00:10:26.568 "superblock": true, 00:10:26.568 "num_base_bdevs": 4, 00:10:26.568 "num_base_bdevs_discovered": 4, 00:10:26.568 "num_base_bdevs_operational": 4, 00:10:26.568 "base_bdevs_list": [ 00:10:26.568 { 00:10:26.568 "name": "pt1", 00:10:26.568 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:26.568 "is_configured": true, 00:10:26.568 "data_offset": 2048, 00:10:26.568 "data_size": 63488 00:10:26.568 }, 00:10:26.568 { 00:10:26.568 "name": "pt2", 00:10:26.568 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:26.568 "is_configured": true, 00:10:26.568 "data_offset": 2048, 00:10:26.568 "data_size": 63488 00:10:26.568 }, 00:10:26.568 { 00:10:26.568 "name": "pt3", 00:10:26.568 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:26.568 "is_configured": true, 00:10:26.568 "data_offset": 2048, 00:10:26.568 "data_size": 63488 00:10:26.568 }, 00:10:26.568 { 00:10:26.568 "name": "pt4", 00:10:26.568 "uuid": 
"00000000-0000-0000-0000-000000000004", 00:10:26.568 "is_configured": true, 00:10:26.568 "data_offset": 2048, 00:10:26.568 "data_size": 63488 00:10:26.568 } 00:10:26.568 ] 00:10:26.568 } 00:10:26.568 } 00:10:26.568 }' 00:10:26.568 04:59:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:26.568 04:59:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:26.568 pt2 00:10:26.568 pt3 00:10:26.568 pt4' 00:10:26.568 04:59:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:26.568 04:59:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:26.568 04:59:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:26.568 04:59:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:26.568 04:59:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:26.568 04:59:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.568 04:59:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.568 04:59:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.568 04:59:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:26.568 04:59:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:26.568 04:59:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:26.568 04:59:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:26.568 04:59:37 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:26.568 04:59:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.568 04:59:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.568 04:59:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.568 04:59:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:26.568 04:59:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:26.568 04:59:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:26.568 04:59:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:26.568 04:59:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:26.568 04:59:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.568 04:59:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.568 04:59:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.828 04:59:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:26.828 04:59:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:26.828 04:59:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:26.828 04:59:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:10:26.828 04:59:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.828 04:59:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.828 04:59:37 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:26.828 04:59:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.828 04:59:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:26.828 04:59:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:26.828 04:59:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:26.828 04:59:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.829 04:59:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:10:26.829 04:59:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.829 [2024-12-14 04:59:37.518292] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:26.829 04:59:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.829 04:59:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' f71e06ee-d3a4-4cee-8804-eccdc2011269 '!=' f71e06ee-d3a4-4cee-8804-eccdc2011269 ']' 00:10:26.829 04:59:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:10:26.829 04:59:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:26.829 04:59:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:26.829 04:59:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:10:26.829 04:59:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.829 04:59:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.829 [2024-12-14 04:59:37.565959] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:10:26.829 04:59:37 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.829 04:59:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:26.829 04:59:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:26.829 04:59:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:26.829 04:59:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:26.829 04:59:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:26.829 04:59:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:26.829 04:59:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:26.829 04:59:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:26.829 04:59:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:26.829 04:59:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:26.829 04:59:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:26.829 04:59:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:26.829 04:59:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.829 04:59:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.829 04:59:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.829 04:59:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:26.829 "name": "raid_bdev1", 00:10:26.829 "uuid": "f71e06ee-d3a4-4cee-8804-eccdc2011269", 00:10:26.829 "strip_size_kb": 0, 00:10:26.829 "state": "online", 
00:10:26.829 "raid_level": "raid1", 00:10:26.829 "superblock": true, 00:10:26.829 "num_base_bdevs": 4, 00:10:26.829 "num_base_bdevs_discovered": 3, 00:10:26.829 "num_base_bdevs_operational": 3, 00:10:26.829 "base_bdevs_list": [ 00:10:26.829 { 00:10:26.829 "name": null, 00:10:26.829 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:26.829 "is_configured": false, 00:10:26.829 "data_offset": 0, 00:10:26.829 "data_size": 63488 00:10:26.829 }, 00:10:26.829 { 00:10:26.829 "name": "pt2", 00:10:26.829 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:26.829 "is_configured": true, 00:10:26.829 "data_offset": 2048, 00:10:26.829 "data_size": 63488 00:10:26.829 }, 00:10:26.829 { 00:10:26.829 "name": "pt3", 00:10:26.829 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:26.829 "is_configured": true, 00:10:26.829 "data_offset": 2048, 00:10:26.829 "data_size": 63488 00:10:26.829 }, 00:10:26.829 { 00:10:26.829 "name": "pt4", 00:10:26.829 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:26.829 "is_configured": true, 00:10:26.829 "data_offset": 2048, 00:10:26.829 "data_size": 63488 00:10:26.829 } 00:10:26.829 ] 00:10:26.829 }' 00:10:26.829 04:59:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:26.829 04:59:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.398 04:59:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:27.398 04:59:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.398 04:59:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.398 [2024-12-14 04:59:38.025104] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:27.398 [2024-12-14 04:59:38.025189] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:27.398 [2024-12-14 04:59:38.025283] bdev_raid.c: 492:_raid_bdev_destruct: 
*DEBUG*: raid_bdev_destruct 00:10:27.398 [2024-12-14 04:59:38.025376] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:27.398 [2024-12-14 04:59:38.025471] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:10:27.398 04:59:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.398 04:59:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:27.398 04:59:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:10:27.398 04:59:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.398 04:59:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.398 04:59:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.398 04:59:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:10:27.398 04:59:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:10:27.398 04:59:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:10:27.398 04:59:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:10:27.398 04:59:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:10:27.398 04:59:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.398 04:59:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.398 04:59:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.398 04:59:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:10:27.398 04:59:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:10:27.398 
04:59:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:10:27.398 04:59:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.398 04:59:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.398 04:59:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.398 04:59:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:10:27.398 04:59:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:10:27.398 04:59:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:10:27.398 04:59:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.398 04:59:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.398 04:59:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.398 04:59:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:10:27.398 04:59:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:10:27.398 04:59:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:10:27.398 04:59:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:10:27.398 04:59:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:27.398 04:59:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.398 04:59:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.398 [2024-12-14 04:59:38.120931] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:27.398 [2024-12-14 04:59:38.121042] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:27.398 [2024-12-14 04:59:38.121064] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:10:27.398 [2024-12-14 04:59:38.121075] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:27.398 [2024-12-14 04:59:38.123183] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:27.398 [2024-12-14 04:59:38.123227] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:27.398 [2024-12-14 04:59:38.123295] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:27.398 [2024-12-14 04:59:38.123329] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:27.398 pt2 00:10:27.398 04:59:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.398 04:59:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:10:27.398 04:59:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:27.398 04:59:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:27.398 04:59:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:27.398 04:59:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:27.398 04:59:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:27.398 04:59:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:27.398 04:59:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:27.398 04:59:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:27.398 04:59:38 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:10:27.398 04:59:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:27.398 04:59:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:27.398 04:59:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.398 04:59:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.398 04:59:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.398 04:59:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:27.398 "name": "raid_bdev1", 00:10:27.398 "uuid": "f71e06ee-d3a4-4cee-8804-eccdc2011269", 00:10:27.398 "strip_size_kb": 0, 00:10:27.398 "state": "configuring", 00:10:27.399 "raid_level": "raid1", 00:10:27.399 "superblock": true, 00:10:27.399 "num_base_bdevs": 4, 00:10:27.399 "num_base_bdevs_discovered": 1, 00:10:27.399 "num_base_bdevs_operational": 3, 00:10:27.399 "base_bdevs_list": [ 00:10:27.399 { 00:10:27.399 "name": null, 00:10:27.399 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:27.399 "is_configured": false, 00:10:27.399 "data_offset": 2048, 00:10:27.399 "data_size": 63488 00:10:27.399 }, 00:10:27.399 { 00:10:27.399 "name": "pt2", 00:10:27.399 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:27.399 "is_configured": true, 00:10:27.399 "data_offset": 2048, 00:10:27.399 "data_size": 63488 00:10:27.399 }, 00:10:27.399 { 00:10:27.399 "name": null, 00:10:27.399 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:27.399 "is_configured": false, 00:10:27.399 "data_offset": 2048, 00:10:27.399 "data_size": 63488 00:10:27.399 }, 00:10:27.399 { 00:10:27.399 "name": null, 00:10:27.399 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:27.399 "is_configured": false, 00:10:27.399 "data_offset": 2048, 00:10:27.399 "data_size": 63488 00:10:27.399 } 00:10:27.399 ] 00:10:27.399 }' 
00:10:27.399 04:59:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:27.399 04:59:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.658 04:59:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:10:27.658 04:59:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:10:27.658 04:59:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:27.658 04:59:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.658 04:59:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.658 [2024-12-14 04:59:38.532300] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:27.658 [2024-12-14 04:59:38.532412] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:27.658 [2024-12-14 04:59:38.532447] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:10:27.658 [2024-12-14 04:59:38.532478] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:27.658 [2024-12-14 04:59:38.532877] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:27.658 [2024-12-14 04:59:38.532941] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:27.658 [2024-12-14 04:59:38.533049] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:27.658 [2024-12-14 04:59:38.533106] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:27.658 pt3 00:10:27.658 04:59:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.658 04:59:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 
3 00:10:27.658 04:59:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:27.658 04:59:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:27.658 04:59:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:27.658 04:59:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:27.918 04:59:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:27.918 04:59:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:27.918 04:59:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:27.918 04:59:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:27.918 04:59:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:27.918 04:59:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:27.918 04:59:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.918 04:59:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.918 04:59:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:27.918 04:59:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.918 04:59:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:27.918 "name": "raid_bdev1", 00:10:27.918 "uuid": "f71e06ee-d3a4-4cee-8804-eccdc2011269", 00:10:27.918 "strip_size_kb": 0, 00:10:27.918 "state": "configuring", 00:10:27.918 "raid_level": "raid1", 00:10:27.918 "superblock": true, 00:10:27.918 "num_base_bdevs": 4, 00:10:27.918 "num_base_bdevs_discovered": 2, 00:10:27.918 "num_base_bdevs_operational": 3, 00:10:27.918 
"base_bdevs_list": [ 00:10:27.918 { 00:10:27.918 "name": null, 00:10:27.918 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:27.918 "is_configured": false, 00:10:27.918 "data_offset": 2048, 00:10:27.918 "data_size": 63488 00:10:27.918 }, 00:10:27.918 { 00:10:27.918 "name": "pt2", 00:10:27.918 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:27.918 "is_configured": true, 00:10:27.918 "data_offset": 2048, 00:10:27.918 "data_size": 63488 00:10:27.918 }, 00:10:27.918 { 00:10:27.918 "name": "pt3", 00:10:27.918 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:27.918 "is_configured": true, 00:10:27.918 "data_offset": 2048, 00:10:27.918 "data_size": 63488 00:10:27.918 }, 00:10:27.918 { 00:10:27.918 "name": null, 00:10:27.918 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:27.918 "is_configured": false, 00:10:27.918 "data_offset": 2048, 00:10:27.918 "data_size": 63488 00:10:27.918 } 00:10:27.918 ] 00:10:27.918 }' 00:10:27.918 04:59:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:27.918 04:59:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.178 04:59:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:10:28.178 04:59:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:10:28.178 04:59:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:10:28.178 04:59:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:10:28.178 04:59:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.178 04:59:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.178 [2024-12-14 04:59:39.003427] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:10:28.178 [2024-12-14 04:59:39.003507] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:28.178 [2024-12-14 04:59:39.003528] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:10:28.178 [2024-12-14 04:59:39.003539] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:28.178 [2024-12-14 04:59:39.003908] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:28.178 [2024-12-14 04:59:39.003928] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:10:28.178 [2024-12-14 04:59:39.003997] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:10:28.178 [2024-12-14 04:59:39.004027] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:10:28.178 [2024-12-14 04:59:39.004129] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:10:28.178 [2024-12-14 04:59:39.004140] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:28.178 [2024-12-14 04:59:39.004382] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:28.178 [2024-12-14 04:59:39.004519] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:10:28.178 [2024-12-14 04:59:39.004538] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00 00:10:28.178 [2024-12-14 04:59:39.004655] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:28.178 pt4 00:10:28.178 04:59:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.178 04:59:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:28.179 04:59:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:28.179 04:59:39 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:28.179 04:59:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:28.179 04:59:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:28.179 04:59:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:28.179 04:59:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:28.179 04:59:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:28.179 04:59:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:28.179 04:59:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:28.179 04:59:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:28.179 04:59:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:28.179 04:59:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.179 04:59:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.179 04:59:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.179 04:59:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:28.179 "name": "raid_bdev1", 00:10:28.179 "uuid": "f71e06ee-d3a4-4cee-8804-eccdc2011269", 00:10:28.179 "strip_size_kb": 0, 00:10:28.179 "state": "online", 00:10:28.179 "raid_level": "raid1", 00:10:28.179 "superblock": true, 00:10:28.179 "num_base_bdevs": 4, 00:10:28.179 "num_base_bdevs_discovered": 3, 00:10:28.179 "num_base_bdevs_operational": 3, 00:10:28.179 "base_bdevs_list": [ 00:10:28.179 { 00:10:28.179 "name": null, 00:10:28.179 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:28.179 "is_configured": false, 00:10:28.179 
"data_offset": 2048, 00:10:28.179 "data_size": 63488 00:10:28.179 }, 00:10:28.179 { 00:10:28.179 "name": "pt2", 00:10:28.179 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:28.179 "is_configured": true, 00:10:28.179 "data_offset": 2048, 00:10:28.179 "data_size": 63488 00:10:28.179 }, 00:10:28.179 { 00:10:28.179 "name": "pt3", 00:10:28.179 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:28.179 "is_configured": true, 00:10:28.179 "data_offset": 2048, 00:10:28.179 "data_size": 63488 00:10:28.179 }, 00:10:28.179 { 00:10:28.179 "name": "pt4", 00:10:28.179 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:28.179 "is_configured": true, 00:10:28.179 "data_offset": 2048, 00:10:28.179 "data_size": 63488 00:10:28.179 } 00:10:28.179 ] 00:10:28.179 }' 00:10:28.179 04:59:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:28.179 04:59:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.749 04:59:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:28.749 04:59:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.749 04:59:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.749 [2024-12-14 04:59:39.398818] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:28.749 [2024-12-14 04:59:39.398891] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:28.749 [2024-12-14 04:59:39.398972] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:28.749 [2024-12-14 04:59:39.399067] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:28.749 [2024-12-14 04:59:39.399130] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:10:28.749 04:59:39 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.749 04:59:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:28.749 04:59:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:10:28.749 04:59:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.749 04:59:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.749 04:59:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.749 04:59:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:10:28.749 04:59:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:10:28.749 04:59:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:10:28.749 04:59:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:10:28.749 04:59:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:10:28.749 04:59:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.749 04:59:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.749 04:59:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.749 04:59:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:28.749 04:59:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.749 04:59:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.749 [2024-12-14 04:59:39.470706] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:28.749 [2024-12-14 04:59:39.470794] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:10:28.749 [2024-12-14 04:59:39.470848] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:10:28.749 [2024-12-14 04:59:39.470877] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:28.749 [2024-12-14 04:59:39.472987] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:28.749 [2024-12-14 04:59:39.473061] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:28.749 [2024-12-14 04:59:39.473151] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:28.749 [2024-12-14 04:59:39.473253] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:28.749 [2024-12-14 04:59:39.473407] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:10:28.749 [2024-12-14 04:59:39.473468] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:28.749 [2024-12-14 04:59:39.473521] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state configuring 00:10:28.749 [2024-12-14 04:59:39.473607] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:28.749 [2024-12-14 04:59:39.473711] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:28.749 pt1 00:10:28.749 04:59:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.749 04:59:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:10:28.749 04:59:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:10:28.749 04:59:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:28.749 04:59:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:10:28.749 04:59:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:28.749 04:59:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:28.749 04:59:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:28.749 04:59:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:28.749 04:59:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:28.749 04:59:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:28.749 04:59:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:28.749 04:59:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:28.749 04:59:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:28.749 04:59:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.749 04:59:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.749 04:59:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.749 04:59:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:28.749 "name": "raid_bdev1", 00:10:28.749 "uuid": "f71e06ee-d3a4-4cee-8804-eccdc2011269", 00:10:28.749 "strip_size_kb": 0, 00:10:28.749 "state": "configuring", 00:10:28.749 "raid_level": "raid1", 00:10:28.749 "superblock": true, 00:10:28.749 "num_base_bdevs": 4, 00:10:28.749 "num_base_bdevs_discovered": 2, 00:10:28.749 "num_base_bdevs_operational": 3, 00:10:28.749 "base_bdevs_list": [ 00:10:28.749 { 00:10:28.749 "name": null, 00:10:28.749 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:28.749 "is_configured": false, 00:10:28.749 "data_offset": 2048, 00:10:28.749 
"data_size": 63488 00:10:28.749 }, 00:10:28.749 { 00:10:28.749 "name": "pt2", 00:10:28.749 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:28.749 "is_configured": true, 00:10:28.749 "data_offset": 2048, 00:10:28.749 "data_size": 63488 00:10:28.749 }, 00:10:28.749 { 00:10:28.749 "name": "pt3", 00:10:28.749 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:28.749 "is_configured": true, 00:10:28.749 "data_offset": 2048, 00:10:28.749 "data_size": 63488 00:10:28.749 }, 00:10:28.749 { 00:10:28.749 "name": null, 00:10:28.749 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:28.749 "is_configured": false, 00:10:28.749 "data_offset": 2048, 00:10:28.749 "data_size": 63488 00:10:28.749 } 00:10:28.749 ] 00:10:28.749 }' 00:10:28.749 04:59:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:28.749 04:59:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.318 04:59:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:10:29.318 04:59:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:10:29.318 04:59:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.318 04:59:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.318 04:59:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.318 04:59:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:10:29.318 04:59:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:10:29.318 04:59:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.318 04:59:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.318 [2024-12-14 
04:59:39.993786] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:10:29.318 [2024-12-14 04:59:39.993884] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:29.318 [2024-12-14 04:59:39.993919] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:10:29.319 [2024-12-14 04:59:39.993949] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:29.319 [2024-12-14 04:59:39.994389] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:29.319 [2024-12-14 04:59:39.994453] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:10:29.319 [2024-12-14 04:59:39.994562] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:10:29.319 [2024-12-14 04:59:39.994596] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:10:29.319 [2024-12-14 04:59:39.994696] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:10:29.319 [2024-12-14 04:59:39.994710] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:29.319 [2024-12-14 04:59:39.994931] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:10:29.319 [2024-12-14 04:59:39.995044] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:10:29.319 [2024-12-14 04:59:39.995052] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:10:29.319 [2024-12-14 04:59:39.995154] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:29.319 pt4 00:10:29.319 04:59:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.319 04:59:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:29.319 04:59:39 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:29.319 04:59:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:29.319 04:59:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:29.319 04:59:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:29.319 04:59:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:29.319 04:59:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:29.319 04:59:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:29.319 04:59:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:29.319 04:59:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:29.319 04:59:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:29.319 04:59:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:29.319 04:59:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.319 04:59:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.319 04:59:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.319 04:59:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:29.319 "name": "raid_bdev1", 00:10:29.319 "uuid": "f71e06ee-d3a4-4cee-8804-eccdc2011269", 00:10:29.319 "strip_size_kb": 0, 00:10:29.319 "state": "online", 00:10:29.319 "raid_level": "raid1", 00:10:29.319 "superblock": true, 00:10:29.319 "num_base_bdevs": 4, 00:10:29.319 "num_base_bdevs_discovered": 3, 00:10:29.319 "num_base_bdevs_operational": 3, 00:10:29.319 "base_bdevs_list": [ 00:10:29.319 { 
00:10:29.319 "name": null, 00:10:29.319 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:29.319 "is_configured": false, 00:10:29.319 "data_offset": 2048, 00:10:29.319 "data_size": 63488 00:10:29.319 }, 00:10:29.319 { 00:10:29.319 "name": "pt2", 00:10:29.319 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:29.319 "is_configured": true, 00:10:29.319 "data_offset": 2048, 00:10:29.319 "data_size": 63488 00:10:29.319 }, 00:10:29.319 { 00:10:29.319 "name": "pt3", 00:10:29.319 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:29.319 "is_configured": true, 00:10:29.319 "data_offset": 2048, 00:10:29.319 "data_size": 63488 00:10:29.319 }, 00:10:29.319 { 00:10:29.319 "name": "pt4", 00:10:29.319 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:29.319 "is_configured": true, 00:10:29.319 "data_offset": 2048, 00:10:29.319 "data_size": 63488 00:10:29.319 } 00:10:29.319 ] 00:10:29.319 }' 00:10:29.319 04:59:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:29.319 04:59:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.579 04:59:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:10:29.579 04:59:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:10:29.579 04:59:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.579 04:59:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.579 04:59:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.579 04:59:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:10:29.579 04:59:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:10:29.579 04:59:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:29.579 
04:59:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.579 04:59:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.839 [2024-12-14 04:59:40.465247] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:29.839 04:59:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.839 04:59:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' f71e06ee-d3a4-4cee-8804-eccdc2011269 '!=' f71e06ee-d3a4-4cee-8804-eccdc2011269 ']' 00:10:29.839 04:59:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 85278 00:10:29.839 04:59:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 85278 ']' 00:10:29.839 04:59:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 85278 00:10:29.839 04:59:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:10:29.839 04:59:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:29.839 04:59:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85278 00:10:29.839 killing process with pid 85278 00:10:29.839 04:59:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:29.839 04:59:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:29.839 04:59:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85278' 00:10:29.839 04:59:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 85278 00:10:29.839 [2024-12-14 04:59:40.522228] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:29.839 [2024-12-14 04:59:40.522305] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:29.839 [2024-12-14 04:59:40.522378] 
bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:29.839 [2024-12-14 04:59:40.522388] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:10:29.839 04:59:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 85278 00:10:29.839 [2024-12-14 04:59:40.565477] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:30.104 04:59:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:10:30.104 00:10:30.104 real 0m7.068s 00:10:30.104 user 0m11.948s 00:10:30.104 sys 0m1.416s 00:10:30.104 ************************************ 00:10:30.104 END TEST raid_superblock_test 00:10:30.104 ************************************ 00:10:30.105 04:59:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:30.105 04:59:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.105 04:59:40 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 4 read 00:10:30.105 04:59:40 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:30.105 04:59:40 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:30.105 04:59:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:30.105 ************************************ 00:10:30.105 START TEST raid_read_error_test 00:10:30.105 ************************************ 00:10:30.105 04:59:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 4 read 00:10:30.105 04:59:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:10:30.105 04:59:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:10:30.105 04:59:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:10:30.105 04:59:40 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:30.105 04:59:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:30.105 04:59:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:30.105 04:59:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:30.105 04:59:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:30.105 04:59:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:30.105 04:59:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:30.105 04:59:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:30.105 04:59:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:30.105 04:59:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:30.105 04:59:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:30.105 04:59:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:10:30.105 04:59:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:30.105 04:59:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:30.105 04:59:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:30.105 04:59:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:30.105 04:59:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:30.105 04:59:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:30.105 04:59:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:30.105 04:59:40 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:30.105 04:59:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:30.105 04:59:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:10:30.105 04:59:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:10:30.105 04:59:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:30.105 04:59:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.r2afSHJpGu 00:10:30.105 04:59:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=85754 00:10:30.105 04:59:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:30.105 04:59:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 85754 00:10:30.105 04:59:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 85754 ']' 00:10:30.105 04:59:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:30.105 04:59:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:30.105 04:59:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:30.105 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:30.105 04:59:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:30.105 04:59:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.366 [2024-12-14 04:59:40.988704] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:10:30.366 [2024-12-14 04:59:40.988904] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85754 ] 00:10:30.366 [2024-12-14 04:59:41.149066] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:30.366 [2024-12-14 04:59:41.195278] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:30.366 [2024-12-14 04:59:41.237247] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:30.366 [2024-12-14 04:59:41.237359] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:30.935 04:59:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:30.935 04:59:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:10:30.935 04:59:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:30.935 04:59:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:30.935 04:59:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.935 04:59:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.195 BaseBdev1_malloc 00:10:31.195 04:59:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.195 04:59:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:31.195 04:59:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.195 04:59:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.195 true 00:10:31.195 04:59:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:10:31.195 04:59:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:31.195 04:59:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.195 04:59:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.195 [2024-12-14 04:59:41.839501] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:31.195 [2024-12-14 04:59:41.839603] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:31.195 [2024-12-14 04:59:41.839640] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:31.195 [2024-12-14 04:59:41.839667] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:31.195 [2024-12-14 04:59:41.841792] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:31.195 [2024-12-14 04:59:41.841879] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:31.195 BaseBdev1 00:10:31.195 04:59:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.195 04:59:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:31.195 04:59:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:31.195 04:59:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.195 04:59:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.195 BaseBdev2_malloc 00:10:31.195 04:59:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.195 04:59:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:31.195 04:59:41 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.195 04:59:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.195 true 00:10:31.195 04:59:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.195 04:59:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:31.195 04:59:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.195 04:59:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.195 [2024-12-14 04:59:41.890056] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:31.195 [2024-12-14 04:59:41.890157] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:31.195 [2024-12-14 04:59:41.890191] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:31.196 [2024-12-14 04:59:41.890201] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:31.196 [2024-12-14 04:59:41.892229] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:31.196 [2024-12-14 04:59:41.892264] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:31.196 BaseBdev2 00:10:31.196 04:59:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.196 04:59:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:31.196 04:59:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:31.196 04:59:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.196 04:59:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.196 BaseBdev3_malloc 00:10:31.196 04:59:41 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.196 04:59:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:31.196 04:59:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.196 04:59:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.196 true 00:10:31.196 04:59:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.196 04:59:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:31.196 04:59:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.196 04:59:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.196 [2024-12-14 04:59:41.930501] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:31.196 [2024-12-14 04:59:41.930545] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:31.196 [2024-12-14 04:59:41.930578] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:31.196 [2024-12-14 04:59:41.930586] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:31.196 [2024-12-14 04:59:41.932628] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:31.196 [2024-12-14 04:59:41.932662] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:31.196 BaseBdev3 00:10:31.196 04:59:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.196 04:59:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:31.196 04:59:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:10:31.196 04:59:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.196 04:59:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.196 BaseBdev4_malloc 00:10:31.196 04:59:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.196 04:59:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:10:31.196 04:59:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.196 04:59:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.196 true 00:10:31.196 04:59:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.196 04:59:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:10:31.196 04:59:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.196 04:59:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.196 [2024-12-14 04:59:41.970959] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:10:31.196 [2024-12-14 04:59:41.971003] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:31.196 [2024-12-14 04:59:41.971039] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:31.196 [2024-12-14 04:59:41.971046] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:31.196 [2024-12-14 04:59:41.973093] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:31.196 [2024-12-14 04:59:41.973129] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:10:31.196 BaseBdev4 00:10:31.196 04:59:41 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.196 04:59:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:10:31.196 04:59:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.196 04:59:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.196 [2024-12-14 04:59:41.982980] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:31.196 [2024-12-14 04:59:41.984849] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:31.196 [2024-12-14 04:59:41.984932] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:31.196 [2024-12-14 04:59:41.984981] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:31.196 [2024-12-14 04:59:41.985199] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007080 00:10:31.196 [2024-12-14 04:59:41.985211] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:31.196 [2024-12-14 04:59:41.985481] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:10:31.196 [2024-12-14 04:59:41.985634] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007080 00:10:31.196 [2024-12-14 04:59:41.985654] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007080 00:10:31.196 [2024-12-14 04:59:41.985819] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:31.196 04:59:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.196 04:59:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:10:31.196 04:59:41 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:31.196 04:59:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:31.196 04:59:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:31.196 04:59:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:31.196 04:59:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:31.196 04:59:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:31.196 04:59:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:31.196 04:59:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:31.196 04:59:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:31.196 04:59:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:31.196 04:59:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:31.196 04:59:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.196 04:59:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.196 04:59:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.196 04:59:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:31.196 "name": "raid_bdev1", 00:10:31.196 "uuid": "e60c62b2-6a4b-43b2-9941-1137d1f4dace", 00:10:31.196 "strip_size_kb": 0, 00:10:31.196 "state": "online", 00:10:31.196 "raid_level": "raid1", 00:10:31.196 "superblock": true, 00:10:31.196 "num_base_bdevs": 4, 00:10:31.196 "num_base_bdevs_discovered": 4, 00:10:31.196 "num_base_bdevs_operational": 4, 00:10:31.196 "base_bdevs_list": [ 00:10:31.196 { 
00:10:31.196 "name": "BaseBdev1", 00:10:31.196 "uuid": "ffb5f424-88f1-549b-b290-bd93d550565e", 00:10:31.196 "is_configured": true, 00:10:31.196 "data_offset": 2048, 00:10:31.196 "data_size": 63488 00:10:31.196 }, 00:10:31.196 { 00:10:31.196 "name": "BaseBdev2", 00:10:31.196 "uuid": "b9ffe50f-3ec5-5de9-8969-275247e86b5b", 00:10:31.196 "is_configured": true, 00:10:31.196 "data_offset": 2048, 00:10:31.196 "data_size": 63488 00:10:31.196 }, 00:10:31.196 { 00:10:31.196 "name": "BaseBdev3", 00:10:31.196 "uuid": "11f49b4b-1546-5f55-9e14-0eb9ede549ee", 00:10:31.196 "is_configured": true, 00:10:31.196 "data_offset": 2048, 00:10:31.196 "data_size": 63488 00:10:31.196 }, 00:10:31.196 { 00:10:31.196 "name": "BaseBdev4", 00:10:31.196 "uuid": "ca249556-9816-54cd-92ad-71b760123808", 00:10:31.196 "is_configured": true, 00:10:31.196 "data_offset": 2048, 00:10:31.196 "data_size": 63488 00:10:31.196 } 00:10:31.196 ] 00:10:31.196 }' 00:10:31.196 04:59:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:31.196 04:59:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.765 04:59:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:31.765 04:59:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:31.765 [2024-12-14 04:59:42.530380] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:32.704 04:59:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:10:32.704 04:59:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.704 04:59:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.704 04:59:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.704 04:59:43 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:32.704 04:59:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:10:32.704 04:59:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:10:32.704 04:59:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:10:32.704 04:59:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:10:32.705 04:59:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:32.705 04:59:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:32.705 04:59:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:32.705 04:59:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:32.705 04:59:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:32.705 04:59:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:32.705 04:59:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:32.705 04:59:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:32.705 04:59:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:32.705 04:59:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:32.705 04:59:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:32.705 04:59:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.705 04:59:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.705 04:59:43 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.705 04:59:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:32.705 "name": "raid_bdev1", 00:10:32.705 "uuid": "e60c62b2-6a4b-43b2-9941-1137d1f4dace", 00:10:32.705 "strip_size_kb": 0, 00:10:32.705 "state": "online", 00:10:32.705 "raid_level": "raid1", 00:10:32.705 "superblock": true, 00:10:32.705 "num_base_bdevs": 4, 00:10:32.705 "num_base_bdevs_discovered": 4, 00:10:32.705 "num_base_bdevs_operational": 4, 00:10:32.705 "base_bdevs_list": [ 00:10:32.705 { 00:10:32.705 "name": "BaseBdev1", 00:10:32.705 "uuid": "ffb5f424-88f1-549b-b290-bd93d550565e", 00:10:32.705 "is_configured": true, 00:10:32.705 "data_offset": 2048, 00:10:32.705 "data_size": 63488 00:10:32.705 }, 00:10:32.705 { 00:10:32.705 "name": "BaseBdev2", 00:10:32.705 "uuid": "b9ffe50f-3ec5-5de9-8969-275247e86b5b", 00:10:32.705 "is_configured": true, 00:10:32.705 "data_offset": 2048, 00:10:32.705 "data_size": 63488 00:10:32.705 }, 00:10:32.705 { 00:10:32.705 "name": "BaseBdev3", 00:10:32.705 "uuid": "11f49b4b-1546-5f55-9e14-0eb9ede549ee", 00:10:32.705 "is_configured": true, 00:10:32.705 "data_offset": 2048, 00:10:32.705 "data_size": 63488 00:10:32.705 }, 00:10:32.705 { 00:10:32.705 "name": "BaseBdev4", 00:10:32.705 "uuid": "ca249556-9816-54cd-92ad-71b760123808", 00:10:32.705 "is_configured": true, 00:10:32.705 "data_offset": 2048, 00:10:32.705 "data_size": 63488 00:10:32.705 } 00:10:32.705 ] 00:10:32.705 }' 00:10:32.705 04:59:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:32.705 04:59:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.275 04:59:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:33.275 04:59:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.275 04:59:43 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:33.275 [2024-12-14 04:59:43.925824] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:33.275 [2024-12-14 04:59:43.925922] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:33.275 [2024-12-14 04:59:43.928508] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:33.275 [2024-12-14 04:59:43.928613] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:33.275 [2024-12-14 04:59:43.928773] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:33.275 [2024-12-14 04:59:43.928863] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state offline 00:10:33.275 04:59:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.275 { 00:10:33.275 "results": [ 00:10:33.275 { 00:10:33.275 "job": "raid_bdev1", 00:10:33.275 "core_mask": "0x1", 00:10:33.275 "workload": "randrw", 00:10:33.275 "percentage": 50, 00:10:33.275 "status": "finished", 00:10:33.275 "queue_depth": 1, 00:10:33.275 "io_size": 131072, 00:10:33.275 "runtime": 1.396546, 00:10:33.275 "iops": 11932.295821261885, 00:10:33.275 "mibps": 1491.5369776577356, 00:10:33.275 "io_failed": 0, 00:10:33.275 "io_timeout": 0, 00:10:33.275 "avg_latency_us": 81.36923105950227, 00:10:33.275 "min_latency_us": 21.687336244541484, 00:10:33.275 "max_latency_us": 1345.0620087336245 00:10:33.275 } 00:10:33.275 ], 00:10:33.275 "core_count": 1 00:10:33.275 } 00:10:33.275 04:59:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 85754 00:10:33.275 04:59:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 85754 ']' 00:10:33.275 04:59:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 85754 00:10:33.275 04:59:43 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@955 -- # uname 00:10:33.275 04:59:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:33.275 04:59:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85754 00:10:33.275 04:59:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:33.275 04:59:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:33.275 04:59:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85754' 00:10:33.275 killing process with pid 85754 00:10:33.275 04:59:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 85754 00:10:33.275 [2024-12-14 04:59:43.974421] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:33.275 04:59:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 85754 00:10:33.275 [2024-12-14 04:59:44.009578] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:33.535 04:59:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.r2afSHJpGu 00:10:33.535 04:59:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:33.535 04:59:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:33.535 04:59:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:10:33.535 04:59:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:10:33.535 04:59:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:33.535 04:59:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:33.535 04:59:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:10:33.535 00:10:33.535 real 0m3.369s 00:10:33.535 user 0m4.264s 00:10:33.535 sys 0m0.547s 
00:10:33.535 ************************************ 00:10:33.535 END TEST raid_read_error_test 00:10:33.535 ************************************ 00:10:33.535 04:59:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:33.535 04:59:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.535 04:59:44 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 4 write 00:10:33.535 04:59:44 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:33.535 04:59:44 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:33.535 04:59:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:33.535 ************************************ 00:10:33.535 START TEST raid_write_error_test 00:10:33.535 ************************************ 00:10:33.535 04:59:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 4 write 00:10:33.535 04:59:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:10:33.535 04:59:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:10:33.535 04:59:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:10:33.535 04:59:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:33.535 04:59:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:33.535 04:59:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:33.535 04:59:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:33.535 04:59:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:33.535 04:59:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:33.535 04:59:44 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:33.535 04:59:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:33.535 04:59:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:33.535 04:59:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:33.535 04:59:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:33.535 04:59:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:10:33.535 04:59:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:33.535 04:59:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:33.535 04:59:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:33.535 04:59:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:33.535 04:59:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:33.535 04:59:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:33.535 04:59:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:33.535 04:59:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:33.535 04:59:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:33.535 04:59:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:10:33.535 04:59:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:10:33.535 04:59:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:33.535 04:59:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.pcPangFiwZ 00:10:33.535 04:59:44 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=85883 00:10:33.535 04:59:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:33.535 04:59:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 85883 00:10:33.535 04:59:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 85883 ']' 00:10:33.535 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:33.535 04:59:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:33.535 04:59:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:33.535 04:59:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:33.536 04:59:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:33.536 04:59:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.795 [2024-12-14 04:59:44.435494] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:10:33.795 [2024-12-14 04:59:44.435614] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85883 ] 00:10:33.795 [2024-12-14 04:59:44.585570] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:33.795 [2024-12-14 04:59:44.631536] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:33.795 [2024-12-14 04:59:44.673959] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:33.795 [2024-12-14 04:59:44.674002] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:34.734 04:59:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:34.734 04:59:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:10:34.734 04:59:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:34.734 04:59:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:34.734 04:59:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.734 04:59:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.734 BaseBdev1_malloc 00:10:34.734 04:59:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.734 04:59:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:34.734 04:59:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.734 04:59:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.734 true 00:10:34.734 04:59:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:10:34.734 04:59:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:34.734 04:59:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.734 04:59:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.734 [2024-12-14 04:59:45.284122] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:34.734 [2024-12-14 04:59:45.284250] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:34.734 [2024-12-14 04:59:45.284276] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:34.734 [2024-12-14 04:59:45.284293] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:34.734 [2024-12-14 04:59:45.286351] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:34.734 [2024-12-14 04:59:45.286387] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:34.734 BaseBdev1 00:10:34.734 04:59:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.734 04:59:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:34.734 04:59:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:34.734 04:59:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.734 04:59:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.734 BaseBdev2_malloc 00:10:34.734 04:59:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.734 04:59:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:34.734 04:59:45 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.734 04:59:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.734 true 00:10:34.734 04:59:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.734 04:59:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:34.734 04:59:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.734 04:59:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.734 [2024-12-14 04:59:45.341392] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:34.734 [2024-12-14 04:59:45.341514] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:34.734 [2024-12-14 04:59:45.341565] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:34.734 [2024-12-14 04:59:45.341626] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:34.734 [2024-12-14 04:59:45.344510] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:34.734 [2024-12-14 04:59:45.344602] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:34.734 BaseBdev2 00:10:34.734 04:59:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.734 04:59:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:34.734 04:59:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:34.734 04:59:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.735 04:59:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:10:34.735 BaseBdev3_malloc 00:10:34.735 04:59:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.735 04:59:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:34.735 04:59:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.735 04:59:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.735 true 00:10:34.735 04:59:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.735 04:59:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:34.735 04:59:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.735 04:59:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.735 [2024-12-14 04:59:45.381816] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:34.735 [2024-12-14 04:59:45.381915] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:34.735 [2024-12-14 04:59:45.381950] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:34.735 [2024-12-14 04:59:45.381977] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:34.735 [2024-12-14 04:59:45.383995] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:34.735 [2024-12-14 04:59:45.384065] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:34.735 BaseBdev3 00:10:34.735 04:59:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.735 04:59:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:34.735 04:59:45 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:10:34.735 04:59:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.735 04:59:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.735 BaseBdev4_malloc 00:10:34.735 04:59:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.735 04:59:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:10:34.735 04:59:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.735 04:59:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.735 true 00:10:34.735 04:59:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.735 04:59:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:10:34.735 04:59:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.735 04:59:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.735 [2024-12-14 04:59:45.422366] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:10:34.735 [2024-12-14 04:59:45.422447] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:34.735 [2024-12-14 04:59:45.422501] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:34.735 [2024-12-14 04:59:45.422529] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:34.735 [2024-12-14 04:59:45.424531] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:34.735 [2024-12-14 04:59:45.424597] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:10:34.735 BaseBdev4 
00:10:34.735 04:59:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.735 04:59:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:10:34.735 04:59:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.735 04:59:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.735 [2024-12-14 04:59:45.434397] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:34.735 [2024-12-14 04:59:45.436317] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:34.735 [2024-12-14 04:59:45.436445] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:34.735 [2024-12-14 04:59:45.436534] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:34.735 [2024-12-14 04:59:45.436778] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007080 00:10:34.735 [2024-12-14 04:59:45.436830] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:34.735 [2024-12-14 04:59:45.437105] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:10:34.735 [2024-12-14 04:59:45.437313] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007080 00:10:34.735 [2024-12-14 04:59:45.437365] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007080 00:10:34.735 [2024-12-14 04:59:45.437553] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:34.735 04:59:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.735 04:59:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 
online raid1 0 4 00:10:34.735 04:59:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:34.735 04:59:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:34.735 04:59:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:34.735 04:59:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:34.735 04:59:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:34.735 04:59:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:34.735 04:59:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:34.735 04:59:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:34.735 04:59:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:34.735 04:59:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:34.735 04:59:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:34.735 04:59:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.735 04:59:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.735 04:59:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.735 04:59:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:34.735 "name": "raid_bdev1", 00:10:34.735 "uuid": "559ea49a-c199-4a24-8e8e-b7fde9eccbf3", 00:10:34.735 "strip_size_kb": 0, 00:10:34.735 "state": "online", 00:10:34.735 "raid_level": "raid1", 00:10:34.735 "superblock": true, 00:10:34.735 "num_base_bdevs": 4, 00:10:34.735 "num_base_bdevs_discovered": 4, 00:10:34.735 
"num_base_bdevs_operational": 4, 00:10:34.735 "base_bdevs_list": [ 00:10:34.735 { 00:10:34.735 "name": "BaseBdev1", 00:10:34.735 "uuid": "94f7f23a-98c2-5272-b6b5-b76460b46c4a", 00:10:34.735 "is_configured": true, 00:10:34.735 "data_offset": 2048, 00:10:34.735 "data_size": 63488 00:10:34.735 }, 00:10:34.735 { 00:10:34.735 "name": "BaseBdev2", 00:10:34.735 "uuid": "885098aa-9232-5abb-9cd5-a671011c9abc", 00:10:34.735 "is_configured": true, 00:10:34.735 "data_offset": 2048, 00:10:34.735 "data_size": 63488 00:10:34.735 }, 00:10:34.735 { 00:10:34.735 "name": "BaseBdev3", 00:10:34.735 "uuid": "b6ed77d5-948f-5538-b3b0-3158672e0c18", 00:10:34.735 "is_configured": true, 00:10:34.735 "data_offset": 2048, 00:10:34.735 "data_size": 63488 00:10:34.735 }, 00:10:34.735 { 00:10:34.735 "name": "BaseBdev4", 00:10:34.735 "uuid": "689db0ae-2fda-5143-b19b-13633fbff2b2", 00:10:34.735 "is_configured": true, 00:10:34.735 "data_offset": 2048, 00:10:34.735 "data_size": 63488 00:10:34.735 } 00:10:34.735 ] 00:10:34.735 }' 00:10:34.735 04:59:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:34.735 04:59:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.304 04:59:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:35.304 04:59:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:35.304 [2024-12-14 04:59:45.965808] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:36.244 04:59:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:10:36.244 04:59:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.244 04:59:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.244 [2024-12-14 04:59:46.888484] 
bdev_raid.c:2272:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:10:36.244 [2024-12-14 04:59:46.888540] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:36.244 [2024-12-14 04:59:46.888779] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005ee0 00:10:36.244 04:59:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.244 04:59:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:36.244 04:59:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:10:36.244 04:59:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:10:36.244 04:59:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:10:36.244 04:59:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:36.244 04:59:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:36.244 04:59:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:36.244 04:59:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:36.244 04:59:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:36.244 04:59:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:36.244 04:59:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:36.244 04:59:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:36.244 04:59:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:36.244 04:59:46 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:10:36.244 04:59:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:36.244 04:59:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:36.244 04:59:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.244 04:59:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.244 04:59:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.244 04:59:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:36.244 "name": "raid_bdev1", 00:10:36.244 "uuid": "559ea49a-c199-4a24-8e8e-b7fde9eccbf3", 00:10:36.244 "strip_size_kb": 0, 00:10:36.244 "state": "online", 00:10:36.244 "raid_level": "raid1", 00:10:36.244 "superblock": true, 00:10:36.244 "num_base_bdevs": 4, 00:10:36.244 "num_base_bdevs_discovered": 3, 00:10:36.244 "num_base_bdevs_operational": 3, 00:10:36.244 "base_bdevs_list": [ 00:10:36.244 { 00:10:36.244 "name": null, 00:10:36.244 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:36.244 "is_configured": false, 00:10:36.244 "data_offset": 0, 00:10:36.244 "data_size": 63488 00:10:36.244 }, 00:10:36.244 { 00:10:36.244 "name": "BaseBdev2", 00:10:36.244 "uuid": "885098aa-9232-5abb-9cd5-a671011c9abc", 00:10:36.244 "is_configured": true, 00:10:36.244 "data_offset": 2048, 00:10:36.244 "data_size": 63488 00:10:36.244 }, 00:10:36.244 { 00:10:36.244 "name": "BaseBdev3", 00:10:36.244 "uuid": "b6ed77d5-948f-5538-b3b0-3158672e0c18", 00:10:36.244 "is_configured": true, 00:10:36.244 "data_offset": 2048, 00:10:36.244 "data_size": 63488 00:10:36.244 }, 00:10:36.244 { 00:10:36.244 "name": "BaseBdev4", 00:10:36.244 "uuid": "689db0ae-2fda-5143-b19b-13633fbff2b2", 00:10:36.244 "is_configured": true, 00:10:36.244 "data_offset": 2048, 00:10:36.244 "data_size": 63488 00:10:36.244 } 00:10:36.244 ] 
00:10:36.244 }' 00:10:36.244 04:59:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:36.244 04:59:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.504 04:59:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:36.504 04:59:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.504 04:59:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.504 [2024-12-14 04:59:47.352180] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:36.504 [2024-12-14 04:59:47.352265] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:36.504 [2024-12-14 04:59:47.354684] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:36.504 [2024-12-14 04:59:47.354773] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:36.504 [2024-12-14 04:59:47.354912] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:36.504 [2024-12-14 04:59:47.354975] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state offline 00:10:36.504 { 00:10:36.504 "results": [ 00:10:36.504 { 00:10:36.504 "job": "raid_bdev1", 00:10:36.505 "core_mask": "0x1", 00:10:36.505 "workload": "randrw", 00:10:36.505 "percentage": 50, 00:10:36.505 "status": "finished", 00:10:36.505 "queue_depth": 1, 00:10:36.505 "io_size": 131072, 00:10:36.505 "runtime": 1.387249, 00:10:36.505 "iops": 12878.726169562926, 00:10:36.505 "mibps": 1609.8407711953657, 00:10:36.505 "io_failed": 0, 00:10:36.505 "io_timeout": 0, 00:10:36.505 "avg_latency_us": 75.17546333525121, 00:10:36.505 "min_latency_us": 21.910917030567685, 00:10:36.505 "max_latency_us": 1445.2262008733624 00:10:36.505 } 00:10:36.505 ], 00:10:36.505 "core_count": 1 
00:10:36.505 } 00:10:36.505 04:59:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.505 04:59:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 85883 00:10:36.505 04:59:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 85883 ']' 00:10:36.505 04:59:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 85883 00:10:36.505 04:59:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:10:36.505 04:59:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:36.505 04:59:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85883 00:10:36.765 04:59:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:36.765 04:59:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:36.765 killing process with pid 85883 00:10:36.765 04:59:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85883' 00:10:36.765 04:59:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 85883 00:10:36.765 [2024-12-14 04:59:47.398893] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:36.765 04:59:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 85883 00:10:36.765 [2024-12-14 04:59:47.433833] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:37.030 04:59:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:37.030 04:59:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.pcPangFiwZ 00:10:37.030 04:59:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:37.030 04:59:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # 
fail_per_s=0.00 00:10:37.030 04:59:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:10:37.030 04:59:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:37.030 04:59:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:37.030 04:59:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:10:37.030 ************************************ 00:10:37.030 END TEST raid_write_error_test 00:10:37.030 ************************************ 00:10:37.030 00:10:37.030 real 0m3.349s 00:10:37.030 user 0m4.159s 00:10:37.030 sys 0m0.599s 00:10:37.030 04:59:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:37.030 04:59:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.030 04:59:47 bdev_raid -- bdev/bdev_raid.sh@976 -- # '[' true = true ']' 00:10:37.030 04:59:47 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:10:37.030 04:59:47 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false true 00:10:37.030 04:59:47 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:10:37.030 04:59:47 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:37.030 04:59:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:37.030 ************************************ 00:10:37.030 START TEST raid_rebuild_test 00:10:37.030 ************************************ 00:10:37.030 04:59:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 false false true 00:10:37.030 04:59:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:10:37.030 04:59:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:10:37.030 04:59:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:10:37.030 
04:59:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:10:37.030 04:59:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:10:37.030 04:59:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:10:37.030 04:59:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:10:37.030 04:59:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:10:37.030 04:59:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:10:37.030 04:59:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:10:37.030 04:59:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:10:37.030 04:59:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:10:37.030 04:59:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:10:37.030 04:59:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:10:37.030 04:59:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:10:37.030 04:59:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:10:37.030 04:59:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:10:37.030 04:59:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:10:37.030 04:59:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:10:37.030 04:59:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:10:37.030 04:59:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:10:37.030 04:59:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:10:37.030 04:59:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 
00:10:37.030 04:59:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=86016 00:10:37.030 04:59:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:10:37.030 04:59:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 86016 00:10:37.030 04:59:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@831 -- # '[' -z 86016 ']' 00:10:37.030 04:59:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:37.030 04:59:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:37.030 04:59:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:37.030 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:37.030 04:59:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:37.030 04:59:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.030 [2024-12-14 04:59:47.852892] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:10:37.030 [2024-12-14 04:59:47.853092] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86016 ] 00:10:37.030 I/O size of 3145728 is greater than zero copy threshold (65536). 00:10:37.031 Zero copy mechanism will not be used. 
00:10:37.305 [2024-12-14 04:59:48.012898] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:37.305 [2024-12-14 04:59:48.058082] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:37.305 [2024-12-14 04:59:48.099896] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:37.305 [2024-12-14 04:59:48.100013] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:37.890 04:59:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:37.890 04:59:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # return 0 00:10:37.890 04:59:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:10:37.890 04:59:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:37.890 04:59:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.890 04:59:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.890 BaseBdev1_malloc 00:10:37.890 04:59:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.891 04:59:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:10:37.891 04:59:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.891 04:59:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.891 [2024-12-14 04:59:48.694414] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:10:37.891 [2024-12-14 04:59:48.694540] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:37.891 [2024-12-14 04:59:48.694587] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:37.891 [2024-12-14 04:59:48.694623] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:37.891 [2024-12-14 04:59:48.696749] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:37.891 [2024-12-14 04:59:48.696819] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:37.891 BaseBdev1 00:10:37.891 04:59:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.891 04:59:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:10:37.891 04:59:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:37.891 04:59:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.891 04:59:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.891 BaseBdev2_malloc 00:10:37.891 04:59:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.891 04:59:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:10:37.891 04:59:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.891 04:59:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.891 [2024-12-14 04:59:48.736313] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:10:37.891 [2024-12-14 04:59:48.736498] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:37.891 [2024-12-14 04:59:48.736601] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:10:37.891 [2024-12-14 04:59:48.736705] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:37.891 [2024-12-14 04:59:48.741187] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:37.891 [2024-12-14 04:59:48.741328] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:37.891 BaseBdev2 00:10:37.891 04:59:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.891 04:59:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:10:37.891 04:59:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.891 04:59:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.891 spare_malloc 00:10:37.891 04:59:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.891 04:59:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:10:37.891 04:59:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.891 04:59:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.891 spare_delay 00:10:38.151 04:59:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.151 04:59:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:10:38.151 04:59:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.151 04:59:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.151 [2024-12-14 04:59:48.779310] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:10:38.151 [2024-12-14 04:59:48.779401] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:38.151 [2024-12-14 04:59:48.779468] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:10:38.151 [2024-12-14 04:59:48.779484] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:38.151 [2024-12-14 
04:59:48.781577] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:38.151 [2024-12-14 04:59:48.781623] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:10:38.151 spare 00:10:38.151 04:59:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.151 04:59:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:10:38.151 04:59:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.151 04:59:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.151 [2024-12-14 04:59:48.791337] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:38.151 [2024-12-14 04:59:48.793120] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:38.151 [2024-12-14 04:59:48.793269] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:10:38.151 [2024-12-14 04:59:48.793334] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:10:38.151 [2024-12-14 04:59:48.793635] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:10:38.151 [2024-12-14 04:59:48.793813] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:10:38.151 [2024-12-14 04:59:48.793867] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:10:38.151 [2024-12-14 04:59:48.794043] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:38.151 04:59:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.151 04:59:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:38.151 04:59:48 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:38.151 04:59:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:38.151 04:59:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:38.151 04:59:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:38.151 04:59:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:38.151 04:59:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:38.151 04:59:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:38.151 04:59:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:38.151 04:59:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:38.151 04:59:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:38.151 04:59:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:38.151 04:59:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.151 04:59:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.151 04:59:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.151 04:59:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:38.151 "name": "raid_bdev1", 00:10:38.151 "uuid": "0e4dc79a-fcda-4474-9f74-2a4cbdf9263e", 00:10:38.151 "strip_size_kb": 0, 00:10:38.151 "state": "online", 00:10:38.151 "raid_level": "raid1", 00:10:38.151 "superblock": false, 00:10:38.151 "num_base_bdevs": 2, 00:10:38.151 "num_base_bdevs_discovered": 2, 00:10:38.151 "num_base_bdevs_operational": 2, 00:10:38.151 "base_bdevs_list": [ 00:10:38.151 { 00:10:38.151 "name": "BaseBdev1", 
00:10:38.151 "uuid": "6b22bd0f-fadf-5d82-b218-2a55973ac3da", 00:10:38.151 "is_configured": true, 00:10:38.151 "data_offset": 0, 00:10:38.151 "data_size": 65536 00:10:38.151 }, 00:10:38.151 { 00:10:38.151 "name": "BaseBdev2", 00:10:38.151 "uuid": "5f548349-963d-56d8-b2b1-710619c9b11f", 00:10:38.151 "is_configured": true, 00:10:38.151 "data_offset": 0, 00:10:38.151 "data_size": 65536 00:10:38.151 } 00:10:38.151 ] 00:10:38.151 }' 00:10:38.151 04:59:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:38.151 04:59:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.411 04:59:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:10:38.411 04:59:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:38.411 04:59:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.411 04:59:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.411 [2024-12-14 04:59:49.274719] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:38.671 04:59:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.671 04:59:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:10:38.671 04:59:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:38.671 04:59:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.671 04:59:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.671 04:59:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:10:38.671 04:59:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.671 04:59:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:10:38.671 
04:59:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:10:38.671 04:59:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:10:38.671 04:59:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:10:38.671 04:59:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:10:38.671 04:59:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:10:38.671 04:59:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:10:38.671 04:59:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:10:38.671 04:59:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:10:38.671 04:59:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:10:38.671 04:59:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:10:38.671 04:59:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:10:38.671 04:59:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:10:38.671 04:59:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:10:38.671 [2024-12-14 04:59:49.522107] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:38.671 /dev/nbd0 00:10:38.931 04:59:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:10:38.931 04:59:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:10:38.931 04:59:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:10:38.931 04:59:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:10:38.931 04:59:49 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@871 -- # (( i = 1 )) 00:10:38.931 04:59:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:10:38.931 04:59:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:10:38.931 04:59:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:10:38.931 04:59:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:10:38.931 04:59:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:10:38.931 04:59:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:38.931 1+0 records in 00:10:38.931 1+0 records out 00:10:38.931 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000606896 s, 6.7 MB/s 00:10:38.931 04:59:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:38.931 04:59:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:10:38.931 04:59:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:38.931 04:59:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:10:38.931 04:59:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:10:38.931 04:59:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:38.931 04:59:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:10:38.931 04:59:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:10:38.931 04:59:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:10:38.931 04:59:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 
00:10:43.128 65536+0 records in 00:10:43.128 65536+0 records out 00:10:43.128 33554432 bytes (34 MB, 32 MiB) copied, 3.64731 s, 9.2 MB/s 00:10:43.128 04:59:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:10:43.128 04:59:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:10:43.128 04:59:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:10:43.128 04:59:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:43.128 04:59:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:10:43.128 04:59:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:43.128 04:59:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:10:43.128 [2024-12-14 04:59:53.432351] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:43.128 04:59:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:43.128 04:59:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:43.128 04:59:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:43.128 04:59:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:43.128 04:59:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:43.128 04:59:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:43.128 04:59:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:10:43.128 04:59:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:10:43.128 04:59:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:10:43.128 04:59:53 bdev_raid.raid_rebuild_test 
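An aside on the dd transfer above: it fills the entire raid1 device. The raid bdev was created earlier in the trace with `blockcnt 65536, blocklen 512`, and raid1 mirroring keeps the array the size of a single base bdev rather than the sum of both. A minimal sketch of that arithmetic, using only values copied from the log:

```python
# Geometry reported by raid_bdev_configure_cont in the log above:
# "blockcnt 65536, blocklen 512".
blockcnt = 65536
blocklen = 512

# raid1 mirrors writes across both base bdevs, so the array's usable
# size equals one base bdev (32 MiB), not two.
raid1_bytes = blockcnt * blocklen

# dd ran with bs=512 count=65536, i.e. exactly the full device;
# this matches the "33554432 bytes (34 MB, 32 MiB)" line it printed.
print(raid1_bytes)
```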
-- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.128 04:59:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.128 [2024-12-14 04:59:53.460409] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:43.128 04:59:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.128 04:59:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:10:43.128 04:59:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:43.128 04:59:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:43.128 04:59:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:43.128 04:59:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:43.128 04:59:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:10:43.128 04:59:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:43.128 04:59:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:43.128 04:59:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:43.128 04:59:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:43.128 04:59:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:43.128 04:59:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:43.128 04:59:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.128 04:59:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.128 04:59:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.128 04:59:53 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:43.128 "name": "raid_bdev1", 00:10:43.128 "uuid": "0e4dc79a-fcda-4474-9f74-2a4cbdf9263e", 00:10:43.128 "strip_size_kb": 0, 00:10:43.128 "state": "online", 00:10:43.128 "raid_level": "raid1", 00:10:43.128 "superblock": false, 00:10:43.128 "num_base_bdevs": 2, 00:10:43.128 "num_base_bdevs_discovered": 1, 00:10:43.128 "num_base_bdevs_operational": 1, 00:10:43.128 "base_bdevs_list": [ 00:10:43.128 { 00:10:43.128 "name": null, 00:10:43.128 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:43.128 "is_configured": false, 00:10:43.128 "data_offset": 0, 00:10:43.128 "data_size": 65536 00:10:43.128 }, 00:10:43.128 { 00:10:43.128 "name": "BaseBdev2", 00:10:43.128 "uuid": "5f548349-963d-56d8-b2b1-710619c9b11f", 00:10:43.128 "is_configured": true, 00:10:43.128 "data_offset": 0, 00:10:43.128 "data_size": 65536 00:10:43.128 } 00:10:43.128 ] 00:10:43.128 }' 00:10:43.128 04:59:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:43.128 04:59:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.128 04:59:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:10:43.128 04:59:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.128 04:59:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.128 [2024-12-14 04:59:53.895672] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:10:43.128 [2024-12-14 04:59:53.899946] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09a30 00:10:43.128 04:59:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.128 04:59:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:10:43.128 [2024-12-14 04:59:53.901846] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started 
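The `verify_raid_bdev_state` helper seen above filters `bdev_raid_get_bdevs all` output with jq (`.[] | select(.name == "raid_bdev1")`) and then compares state, RAID level, and base-bdev counts against expectations. A hedged Python equivalent of that selection and check — the JSON fields are copied from the dump above (after `BaseBdev1` was removed), and the helper's name comes from the test script, not a public SPDK API:

```python
import json

# Shape of one entry from `rpc.py bdev_raid_get_bdevs all`, as dumped
# in the log once one base bdev slot had been cleared.
bdevs = json.loads("""[{
    "name": "raid_bdev1",
    "state": "online",
    "raid_level": "raid1",
    "num_base_bdevs": 2,
    "num_base_bdevs_discovered": 1,
    "num_base_bdevs_operational": 1
}]""")

# jq: .[] | select(.name == "raid_bdev1")
info = next(b for b in bdevs if b["name"] == "raid_bdev1")

# The comparisons verify_raid_bdev_state makes for "online raid1 0 1":
assert info["state"] == "online"
assert info["raid_level"] == "raid1"
assert info["num_base_bdevs_discovered"] == 1
assert info["num_base_bdevs_operational"] == 1
```

The point of the check at this stage is that removing a base bdev from a raid1 array leaves it online but degraded: still 2 slots, only 1 discovered and operational.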
rebuild on raid bdev raid_bdev1 00:10:44.067 04:59:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:10:44.067 04:59:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:44.067 04:59:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:10:44.067 04:59:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:10:44.067 04:59:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:44.067 04:59:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:44.067 04:59:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:44.067 04:59:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.067 04:59:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.067 04:59:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.327 04:59:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:44.327 "name": "raid_bdev1", 00:10:44.327 "uuid": "0e4dc79a-fcda-4474-9f74-2a4cbdf9263e", 00:10:44.327 "strip_size_kb": 0, 00:10:44.327 "state": "online", 00:10:44.327 "raid_level": "raid1", 00:10:44.327 "superblock": false, 00:10:44.327 "num_base_bdevs": 2, 00:10:44.327 "num_base_bdevs_discovered": 2, 00:10:44.327 "num_base_bdevs_operational": 2, 00:10:44.327 "process": { 00:10:44.327 "type": "rebuild", 00:10:44.327 "target": "spare", 00:10:44.327 "progress": { 00:10:44.327 "blocks": 20480, 00:10:44.327 "percent": 31 00:10:44.327 } 00:10:44.327 }, 00:10:44.327 "base_bdevs_list": [ 00:10:44.327 { 00:10:44.327 "name": "spare", 00:10:44.327 "uuid": "f1ebda02-9a7c-53ce-81a8-f25aa0ec0675", 00:10:44.327 "is_configured": true, 00:10:44.327 "data_offset": 0, 00:10:44.327 
"data_size": 65536 00:10:44.327 }, 00:10:44.327 { 00:10:44.327 "name": "BaseBdev2", 00:10:44.327 "uuid": "5f548349-963d-56d8-b2b1-710619c9b11f", 00:10:44.327 "is_configured": true, 00:10:44.327 "data_offset": 0, 00:10:44.327 "data_size": 65536 00:10:44.327 } 00:10:44.327 ] 00:10:44.327 }' 00:10:44.327 04:59:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:44.327 04:59:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:10:44.327 04:59:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:44.327 04:59:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:10:44.327 04:59:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:10:44.328 04:59:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.328 04:59:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.328 [2024-12-14 04:59:55.058609] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:10:44.328 [2024-12-14 04:59:55.106437] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:10:44.328 [2024-12-14 04:59:55.106588] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:44.328 [2024-12-14 04:59:55.106632] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:10:44.328 [2024-12-14 04:59:55.106663] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:10:44.328 04:59:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.328 04:59:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:10:44.328 04:59:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:10:44.328 04:59:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:44.328 04:59:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:44.328 04:59:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:44.328 04:59:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:10:44.328 04:59:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:44.328 04:59:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:44.328 04:59:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:44.328 04:59:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:44.328 04:59:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:44.328 04:59:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.328 04:59:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.328 04:59:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:44.328 04:59:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.328 04:59:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:44.328 "name": "raid_bdev1", 00:10:44.328 "uuid": "0e4dc79a-fcda-4474-9f74-2a4cbdf9263e", 00:10:44.328 "strip_size_kb": 0, 00:10:44.328 "state": "online", 00:10:44.328 "raid_level": "raid1", 00:10:44.328 "superblock": false, 00:10:44.328 "num_base_bdevs": 2, 00:10:44.328 "num_base_bdevs_discovered": 1, 00:10:44.328 "num_base_bdevs_operational": 1, 00:10:44.328 "base_bdevs_list": [ 00:10:44.328 { 00:10:44.328 "name": null, 00:10:44.328 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:44.328 
"is_configured": false, 00:10:44.328 "data_offset": 0, 00:10:44.328 "data_size": 65536 00:10:44.328 }, 00:10:44.328 { 00:10:44.328 "name": "BaseBdev2", 00:10:44.328 "uuid": "5f548349-963d-56d8-b2b1-710619c9b11f", 00:10:44.328 "is_configured": true, 00:10:44.328 "data_offset": 0, 00:10:44.328 "data_size": 65536 00:10:44.328 } 00:10:44.328 ] 00:10:44.328 }' 00:10:44.328 04:59:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:44.328 04:59:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.896 04:59:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:10:44.896 04:59:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:44.896 04:59:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:10:44.896 04:59:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:10:44.896 04:59:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:44.896 04:59:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:44.896 04:59:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:44.896 04:59:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.896 04:59:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.896 04:59:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.896 04:59:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:44.896 "name": "raid_bdev1", 00:10:44.896 "uuid": "0e4dc79a-fcda-4474-9f74-2a4cbdf9263e", 00:10:44.896 "strip_size_kb": 0, 00:10:44.896 "state": "online", 00:10:44.896 "raid_level": "raid1", 00:10:44.896 "superblock": false, 00:10:44.896 "num_base_bdevs": 2, 00:10:44.896 
"num_base_bdevs_discovered": 1, 00:10:44.896 "num_base_bdevs_operational": 1, 00:10:44.896 "base_bdevs_list": [ 00:10:44.896 { 00:10:44.896 "name": null, 00:10:44.896 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:44.896 "is_configured": false, 00:10:44.896 "data_offset": 0, 00:10:44.896 "data_size": 65536 00:10:44.896 }, 00:10:44.896 { 00:10:44.896 "name": "BaseBdev2", 00:10:44.896 "uuid": "5f548349-963d-56d8-b2b1-710619c9b11f", 00:10:44.896 "is_configured": true, 00:10:44.896 "data_offset": 0, 00:10:44.896 "data_size": 65536 00:10:44.896 } 00:10:44.896 ] 00:10:44.896 }' 00:10:44.896 04:59:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:44.897 04:59:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:10:44.897 04:59:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:44.897 04:59:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:10:44.897 04:59:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:10:44.897 04:59:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.897 04:59:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.897 [2024-12-14 04:59:55.674164] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:10:44.897 [2024-12-14 04:59:55.678138] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09b00 00:10:44.897 04:59:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.897 04:59:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:10:44.897 [2024-12-14 04:59:55.679915] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:10:45.835 04:59:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # 
verify_raid_bdev_process raid_bdev1 rebuild spare 00:10:45.835 04:59:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:45.835 04:59:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:10:45.835 04:59:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:10:45.835 04:59:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:45.835 04:59:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.835 04:59:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:45.835 04:59:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.835 04:59:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.835 04:59:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.095 04:59:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:46.095 "name": "raid_bdev1", 00:10:46.095 "uuid": "0e4dc79a-fcda-4474-9f74-2a4cbdf9263e", 00:10:46.095 "strip_size_kb": 0, 00:10:46.095 "state": "online", 00:10:46.095 "raid_level": "raid1", 00:10:46.095 "superblock": false, 00:10:46.095 "num_base_bdevs": 2, 00:10:46.095 "num_base_bdevs_discovered": 2, 00:10:46.095 "num_base_bdevs_operational": 2, 00:10:46.095 "process": { 00:10:46.095 "type": "rebuild", 00:10:46.095 "target": "spare", 00:10:46.095 "progress": { 00:10:46.095 "blocks": 20480, 00:10:46.095 "percent": 31 00:10:46.095 } 00:10:46.095 }, 00:10:46.095 "base_bdevs_list": [ 00:10:46.095 { 00:10:46.095 "name": "spare", 00:10:46.095 "uuid": "f1ebda02-9a7c-53ce-81a8-f25aa0ec0675", 00:10:46.095 "is_configured": true, 00:10:46.095 "data_offset": 0, 00:10:46.095 "data_size": 65536 00:10:46.095 }, 00:10:46.095 { 00:10:46.095 "name": "BaseBdev2", 00:10:46.095 "uuid": 
"5f548349-963d-56d8-b2b1-710619c9b11f", 00:10:46.095 "is_configured": true, 00:10:46.095 "data_offset": 0, 00:10:46.095 "data_size": 65536 00:10:46.095 } 00:10:46.095 ] 00:10:46.095 }' 00:10:46.095 04:59:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:46.095 04:59:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:10:46.095 04:59:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:46.095 04:59:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:10:46.095 04:59:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:10:46.095 04:59:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:10:46.096 04:59:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:10:46.096 04:59:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:10:46.096 04:59:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=287 00:10:46.096 04:59:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:10:46.096 04:59:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:10:46.096 04:59:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:46.096 04:59:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:10:46.096 04:59:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:10:46.096 04:59:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:46.096 04:59:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:46.096 04:59:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:10:46.096 04:59:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.096 04:59:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.096 04:59:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.096 04:59:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:46.096 "name": "raid_bdev1", 00:10:46.096 "uuid": "0e4dc79a-fcda-4474-9f74-2a4cbdf9263e", 00:10:46.096 "strip_size_kb": 0, 00:10:46.096 "state": "online", 00:10:46.096 "raid_level": "raid1", 00:10:46.096 "superblock": false, 00:10:46.096 "num_base_bdevs": 2, 00:10:46.096 "num_base_bdevs_discovered": 2, 00:10:46.096 "num_base_bdevs_operational": 2, 00:10:46.096 "process": { 00:10:46.096 "type": "rebuild", 00:10:46.096 "target": "spare", 00:10:46.096 "progress": { 00:10:46.096 "blocks": 22528, 00:10:46.096 "percent": 34 00:10:46.096 } 00:10:46.096 }, 00:10:46.096 "base_bdevs_list": [ 00:10:46.096 { 00:10:46.096 "name": "spare", 00:10:46.096 "uuid": "f1ebda02-9a7c-53ce-81a8-f25aa0ec0675", 00:10:46.096 "is_configured": true, 00:10:46.096 "data_offset": 0, 00:10:46.096 "data_size": 65536 00:10:46.096 }, 00:10:46.096 { 00:10:46.096 "name": "BaseBdev2", 00:10:46.096 "uuid": "5f548349-963d-56d8-b2b1-710619c9b11f", 00:10:46.096 "is_configured": true, 00:10:46.096 "data_offset": 0, 00:10:46.096 "data_size": 65536 00:10:46.096 } 00:10:46.096 ] 00:10:46.096 }' 00:10:46.096 04:59:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:46.096 04:59:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:10:46.096 04:59:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:46.096 04:59:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:10:46.096 04:59:56 bdev_raid.raid_rebuild_test -- 
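The rebuild progress figures in these dumps ("blocks": 20480 → "percent": 31, 22528 → 34, and later 45056 → 68) are consistent with the completed block count over the 65536-block array, floored to an integer percent. A quick sketch of that relation, assuming integer truncation and using only pairs copied from the log:

```python
# Array size in blocks, from the raid bdev creation earlier in the log.
TOTAL_BLOCKS = 65536

def progress_percent(blocks_done: int) -> int:
    """Integer percent matching the 'process.progress' dumps (floored)."""
    return blocks_done * 100 // TOTAL_BLOCKS

# (blocks, percent) pairs taken from the verify_raid_bdev_process dumps:
for blocks, expected in [(20480, 31), (22528, 34), (45056, 68)]:
    assert progress_percent(blocks) == expected
```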
bdev/bdev_raid.sh@711 -- # sleep 1 00:10:47.477 04:59:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:10:47.477 04:59:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:10:47.477 04:59:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:47.477 04:59:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:10:47.477 04:59:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:10:47.477 04:59:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:47.477 04:59:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:47.477 04:59:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.477 04:59:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:47.477 04:59:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.477 04:59:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.477 04:59:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:47.477 "name": "raid_bdev1", 00:10:47.477 "uuid": "0e4dc79a-fcda-4474-9f74-2a4cbdf9263e", 00:10:47.477 "strip_size_kb": 0, 00:10:47.477 "state": "online", 00:10:47.477 "raid_level": "raid1", 00:10:47.477 "superblock": false, 00:10:47.477 "num_base_bdevs": 2, 00:10:47.477 "num_base_bdevs_discovered": 2, 00:10:47.477 "num_base_bdevs_operational": 2, 00:10:47.477 "process": { 00:10:47.477 "type": "rebuild", 00:10:47.477 "target": "spare", 00:10:47.477 "progress": { 00:10:47.477 "blocks": 45056, 00:10:47.477 "percent": 68 00:10:47.477 } 00:10:47.477 }, 00:10:47.477 "base_bdevs_list": [ 00:10:47.477 { 00:10:47.477 "name": "spare", 00:10:47.477 "uuid": 
"f1ebda02-9a7c-53ce-81a8-f25aa0ec0675", 00:10:47.477 "is_configured": true, 00:10:47.477 "data_offset": 0, 00:10:47.477 "data_size": 65536 00:10:47.477 }, 00:10:47.477 { 00:10:47.477 "name": "BaseBdev2", 00:10:47.477 "uuid": "5f548349-963d-56d8-b2b1-710619c9b11f", 00:10:47.477 "is_configured": true, 00:10:47.477 "data_offset": 0, 00:10:47.477 "data_size": 65536 00:10:47.477 } 00:10:47.477 ] 00:10:47.477 }' 00:10:47.477 04:59:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:47.477 04:59:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:10:47.477 04:59:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:47.477 04:59:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:10:47.477 04:59:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:10:48.046 [2024-12-14 04:59:58.890395] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:10:48.046 [2024-12-14 04:59:58.890536] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:10:48.046 [2024-12-14 04:59:58.890597] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:48.305 04:59:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:10:48.305 04:59:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:10:48.305 04:59:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:48.305 04:59:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:10:48.305 04:59:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:10:48.305 04:59:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:48.305 04:59:59 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:48.305 04:59:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:48.305 04:59:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.305 04:59:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.305 04:59:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.305 04:59:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:48.305 "name": "raid_bdev1", 00:10:48.305 "uuid": "0e4dc79a-fcda-4474-9f74-2a4cbdf9263e", 00:10:48.305 "strip_size_kb": 0, 00:10:48.305 "state": "online", 00:10:48.306 "raid_level": "raid1", 00:10:48.306 "superblock": false, 00:10:48.306 "num_base_bdevs": 2, 00:10:48.306 "num_base_bdevs_discovered": 2, 00:10:48.306 "num_base_bdevs_operational": 2, 00:10:48.306 "base_bdevs_list": [ 00:10:48.306 { 00:10:48.306 "name": "spare", 00:10:48.306 "uuid": "f1ebda02-9a7c-53ce-81a8-f25aa0ec0675", 00:10:48.306 "is_configured": true, 00:10:48.306 "data_offset": 0, 00:10:48.306 "data_size": 65536 00:10:48.306 }, 00:10:48.306 { 00:10:48.306 "name": "BaseBdev2", 00:10:48.306 "uuid": "5f548349-963d-56d8-b2b1-710619c9b11f", 00:10:48.306 "is_configured": true, 00:10:48.306 "data_offset": 0, 00:10:48.306 "data_size": 65536 00:10:48.306 } 00:10:48.306 ] 00:10:48.306 }' 00:10:48.306 04:59:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:48.306 04:59:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:10:48.565 04:59:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:48.565 04:59:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:10:48.565 04:59:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # 
break 00:10:48.565 04:59:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:10:48.565 04:59:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:48.565 04:59:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:10:48.565 04:59:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:10:48.565 04:59:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:48.565 04:59:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:48.565 04:59:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:48.565 04:59:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.565 04:59:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.565 04:59:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.565 04:59:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:48.565 "name": "raid_bdev1", 00:10:48.565 "uuid": "0e4dc79a-fcda-4474-9f74-2a4cbdf9263e", 00:10:48.565 "strip_size_kb": 0, 00:10:48.565 "state": "online", 00:10:48.565 "raid_level": "raid1", 00:10:48.565 "superblock": false, 00:10:48.565 "num_base_bdevs": 2, 00:10:48.565 "num_base_bdevs_discovered": 2, 00:10:48.565 "num_base_bdevs_operational": 2, 00:10:48.565 "base_bdevs_list": [ 00:10:48.565 { 00:10:48.565 "name": "spare", 00:10:48.565 "uuid": "f1ebda02-9a7c-53ce-81a8-f25aa0ec0675", 00:10:48.565 "is_configured": true, 00:10:48.565 "data_offset": 0, 00:10:48.565 "data_size": 65536 00:10:48.565 }, 00:10:48.565 { 00:10:48.565 "name": "BaseBdev2", 00:10:48.565 "uuid": "5f548349-963d-56d8-b2b1-710619c9b11f", 00:10:48.565 "is_configured": true, 00:10:48.565 "data_offset": 0, 00:10:48.565 "data_size": 65536 
00:10:48.565 } 00:10:48.565 ] 00:10:48.565 }' 00:10:48.565 04:59:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:48.565 04:59:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:10:48.565 04:59:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:48.565 04:59:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:10:48.565 04:59:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:48.565 04:59:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:48.565 04:59:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:48.565 04:59:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:48.565 04:59:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:48.565 04:59:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:48.565 04:59:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:48.565 04:59:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:48.565 04:59:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:48.565 04:59:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:48.565 04:59:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:48.565 04:59:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:48.565 04:59:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.565 04:59:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.565 
04:59:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.565 04:59:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:48.565 "name": "raid_bdev1", 00:10:48.565 "uuid": "0e4dc79a-fcda-4474-9f74-2a4cbdf9263e", 00:10:48.565 "strip_size_kb": 0, 00:10:48.565 "state": "online", 00:10:48.565 "raid_level": "raid1", 00:10:48.565 "superblock": false, 00:10:48.565 "num_base_bdevs": 2, 00:10:48.565 "num_base_bdevs_discovered": 2, 00:10:48.565 "num_base_bdevs_operational": 2, 00:10:48.565 "base_bdevs_list": [ 00:10:48.565 { 00:10:48.565 "name": "spare", 00:10:48.565 "uuid": "f1ebda02-9a7c-53ce-81a8-f25aa0ec0675", 00:10:48.565 "is_configured": true, 00:10:48.565 "data_offset": 0, 00:10:48.565 "data_size": 65536 00:10:48.565 }, 00:10:48.565 { 00:10:48.565 "name": "BaseBdev2", 00:10:48.565 "uuid": "5f548349-963d-56d8-b2b1-710619c9b11f", 00:10:48.565 "is_configured": true, 00:10:48.565 "data_offset": 0, 00:10:48.565 "data_size": 65536 00:10:48.565 } 00:10:48.565 ] 00:10:48.565 }' 00:10:48.565 04:59:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:48.565 04:59:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.135 04:59:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:49.135 04:59:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.135 04:59:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.135 [2024-12-14 04:59:59.809111] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:49.135 [2024-12-14 04:59:59.809197] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:49.135 [2024-12-14 04:59:59.809318] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:49.135 [2024-12-14 04:59:59.809423] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:49.135 [2024-12-14 04:59:59.809490] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:10:49.135 04:59:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.135 04:59:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:49.135 04:59:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.135 04:59:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.135 04:59:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:10:49.135 04:59:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.135 04:59:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:10:49.135 04:59:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:10:49.135 04:59:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:10:49.135 04:59:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:10:49.135 04:59:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:10:49.135 04:59:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:10:49.135 04:59:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:10:49.135 04:59:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:49.135 04:59:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:10:49.135 04:59:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:10:49.135 04:59:59 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:10:49.135 04:59:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:49.135 04:59:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:10:49.395 /dev/nbd0 00:10:49.395 05:00:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:10:49.395 05:00:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:10:49.395 05:00:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:10:49.395 05:00:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:10:49.395 05:00:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:10:49.395 05:00:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:10:49.395 05:00:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:10:49.395 05:00:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:10:49.395 05:00:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:10:49.395 05:00:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:10:49.395 05:00:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:49.395 1+0 records in 00:10:49.395 1+0 records out 00:10:49.395 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000560244 s, 7.3 MB/s 00:10:49.395 05:00:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:49.395 05:00:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:10:49.395 05:00:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # 
rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:49.395 05:00:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:10:49.395 05:00:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:10:49.395 05:00:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:49.395 05:00:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:49.395 05:00:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:10:49.395 /dev/nbd1 00:10:49.654 05:00:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:10:49.654 05:00:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:10:49.654 05:00:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:10:49.654 05:00:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:10:49.654 05:00:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:10:49.654 05:00:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:10:49.654 05:00:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:10:49.654 05:00:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:10:49.654 05:00:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:10:49.654 05:00:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:10:49.654 05:00:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:49.654 1+0 records in 00:10:49.654 1+0 records out 00:10:49.654 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000285011 s, 14.4 MB/s 00:10:49.654 05:00:00 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:49.654 05:00:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:10:49.654 05:00:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:49.654 05:00:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:10:49.654 05:00:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:10:49.654 05:00:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:49.654 05:00:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:49.654 05:00:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:10:49.654 05:00:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:10:49.654 05:00:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:10:49.654 05:00:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:49.654 05:00:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:49.654 05:00:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:10:49.654 05:00:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:49.654 05:00:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:10:49.914 05:00:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:49.914 05:00:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:49.914 05:00:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:49.914 
05:00:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:49.914 05:00:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:49.914 05:00:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:49.914 05:00:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:10:49.914 05:00:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:10:49.914 05:00:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:49.914 05:00:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:10:50.173 05:00:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:10:50.173 05:00:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:10:50.173 05:00:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:10:50.173 05:00:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:50.173 05:00:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:50.173 05:00:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:10:50.173 05:00:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:10:50.173 05:00:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:10:50.173 05:00:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:10:50.173 05:00:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 86016 00:10:50.173 05:00:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@950 -- # '[' -z 86016 ']' 00:10:50.173 05:00:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # kill -0 86016 00:10:50.173 05:00:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@955 
-- # uname 00:10:50.173 05:00:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:50.173 05:00:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 86016 00:10:50.173 killing process with pid 86016 00:10:50.173 Received shutdown signal, test time was about 60.000000 seconds 00:10:50.173 00:10:50.173 Latency(us) 00:10:50.173 [2024-12-14T05:00:01.056Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:50.173 [2024-12-14T05:00:01.056Z] =================================================================================================================== 00:10:50.173 [2024-12-14T05:00:01.056Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:10:50.173 05:00:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:50.173 05:00:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:50.173 05:00:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 86016' 00:10:50.173 05:00:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@969 -- # kill 86016 00:10:50.173 [2024-12-14 05:00:00.856720] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:50.173 05:00:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@974 -- # wait 86016 00:10:50.173 [2024-12-14 05:00:00.887730] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:50.433 05:00:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:10:50.433 00:10:50.433 real 0m13.366s 00:10:50.433 user 0m15.409s 00:10:50.433 sys 0m2.828s 00:10:50.433 05:00:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:50.433 05:00:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.433 ************************************ 00:10:50.433 END TEST raid_rebuild_test 
00:10:50.433 ************************************ 00:10:50.433 05:00:01 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false true 00:10:50.433 05:00:01 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:10:50.433 05:00:01 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:50.433 05:00:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:50.433 ************************************ 00:10:50.433 START TEST raid_rebuild_test_sb 00:10:50.433 ************************************ 00:10:50.433 05:00:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true false true 00:10:50.433 05:00:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:10:50.433 05:00:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:10:50.433 05:00:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:10:50.433 05:00:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:10:50.433 05:00:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:10:50.433 05:00:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:10:50.433 05:00:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:10:50.433 05:00:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:10:50.433 05:00:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:10:50.433 05:00:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:10:50.433 05:00:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:10:50.433 05:00:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:10:50.433 05:00:01 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:10:50.433 05:00:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:10:50.433 05:00:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:10:50.433 05:00:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:10:50.433 05:00:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:10:50.433 05:00:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:10:50.433 05:00:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:10:50.433 05:00:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:10:50.433 05:00:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:10:50.433 05:00:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:10:50.433 05:00:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:10:50.433 05:00:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:10:50.433 05:00:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=86411 00:10:50.433 05:00:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:10:50.433 05:00:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 86411 00:10:50.433 05:00:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@831 -- # '[' -z 86411 ']' 00:10:50.433 05:00:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:50.433 05:00:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:50.433 05:00:01 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:50.433 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:50.433 05:00:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:50.433 05:00:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.433 [2024-12-14 05:00:01.292358] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:10:50.433 [2024-12-14 05:00:01.292587] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:10:50.433 Zero copy mechanism will not be used. 00:10:50.433 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86411 ] 00:10:50.692 [2024-12-14 05:00:01.452625] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:50.692 [2024-12-14 05:00:01.500891] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:50.692 [2024-12-14 05:00:01.542834] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:50.692 [2024-12-14 05:00:01.542947] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:51.260 05:00:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:51.260 05:00:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # return 0 00:10:51.260 05:00:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:10:51.260 05:00:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:51.260 05:00:02 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.260 05:00:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.260 BaseBdev1_malloc 00:10:51.260 05:00:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.260 05:00:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:10:51.260 05:00:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.260 05:00:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.260 [2024-12-14 05:00:02.137063] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:10:51.260 [2024-12-14 05:00:02.137192] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:51.260 [2024-12-14 05:00:02.137242] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:51.260 [2024-12-14 05:00:02.137292] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:51.260 [2024-12-14 05:00:02.139477] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:51.260 [2024-12-14 05:00:02.139553] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:51.260 BaseBdev1 00:10:51.520 05:00:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.520 05:00:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:10:51.520 05:00:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:51.520 05:00:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.520 05:00:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.520 BaseBdev2_malloc 00:10:51.520 
05:00:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.520 05:00:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:10:51.520 05:00:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.520 05:00:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.520 [2024-12-14 05:00:02.186081] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:10:51.520 [2024-12-14 05:00:02.186420] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:51.520 [2024-12-14 05:00:02.186586] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:10:51.520 [2024-12-14 05:00:02.186679] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:51.520 [2024-12-14 05:00:02.191286] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:51.520 [2024-12-14 05:00:02.191430] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:51.520 BaseBdev2 00:10:51.520 05:00:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.520 05:00:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:10:51.521 05:00:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.521 05:00:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.521 spare_malloc 00:10:51.521 05:00:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.521 05:00:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:10:51.521 05:00:02 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.521 05:00:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.521 spare_delay 00:10:51.521 05:00:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.521 05:00:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:10:51.521 05:00:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.521 05:00:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.521 [2024-12-14 05:00:02.229304] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:10:51.521 [2024-12-14 05:00:02.229407] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:51.521 [2024-12-14 05:00:02.229446] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:10:51.521 [2024-12-14 05:00:02.229480] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:51.521 [2024-12-14 05:00:02.231543] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:51.521 [2024-12-14 05:00:02.231629] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:10:51.521 spare 00:10:51.521 05:00:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.521 05:00:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:10:51.521 05:00:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.521 05:00:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.521 [2024-12-14 05:00:02.241314] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:51.521 [2024-12-14 
05:00:02.243122] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:51.521 [2024-12-14 05:00:02.243361] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:10:51.521 [2024-12-14 05:00:02.243412] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:51.521 [2024-12-14 05:00:02.243685] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:10:51.521 [2024-12-14 05:00:02.243869] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:10:51.521 [2024-12-14 05:00:02.243921] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:10:51.521 [2024-12-14 05:00:02.244114] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:51.521 05:00:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.521 05:00:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:51.521 05:00:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:51.521 05:00:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:51.521 05:00:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:51.521 05:00:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:51.521 05:00:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:51.521 05:00:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:51.521 05:00:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:51.521 05:00:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:10:51.521 05:00:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:51.521 05:00:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:51.521 05:00:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:51.521 05:00:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.521 05:00:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.521 05:00:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.521 05:00:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:51.521 "name": "raid_bdev1", 00:10:51.521 "uuid": "6061cdf4-4d05-4d8d-ad29-63778ed4c8ae", 00:10:51.521 "strip_size_kb": 0, 00:10:51.521 "state": "online", 00:10:51.521 "raid_level": "raid1", 00:10:51.521 "superblock": true, 00:10:51.521 "num_base_bdevs": 2, 00:10:51.521 "num_base_bdevs_discovered": 2, 00:10:51.521 "num_base_bdevs_operational": 2, 00:10:51.521 "base_bdevs_list": [ 00:10:51.521 { 00:10:51.521 "name": "BaseBdev1", 00:10:51.521 "uuid": "a318a782-4ec3-5d8d-a1be-beb234956f4a", 00:10:51.521 "is_configured": true, 00:10:51.521 "data_offset": 2048, 00:10:51.521 "data_size": 63488 00:10:51.521 }, 00:10:51.521 { 00:10:51.521 "name": "BaseBdev2", 00:10:51.521 "uuid": "f1b1c70d-a5a4-5df0-bb85-76ceb0c72cd7", 00:10:51.521 "is_configured": true, 00:10:51.521 "data_offset": 2048, 00:10:51.521 "data_size": 63488 00:10:51.521 } 00:10:51.521 ] 00:10:51.521 }' 00:10:51.521 05:00:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:51.521 05:00:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.091 05:00:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:52.091 05:00:02 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.091 05:00:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.091 05:00:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:10:52.091 [2024-12-14 05:00:02.696808] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:52.091 05:00:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.091 05:00:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:10:52.091 05:00:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:10:52.091 05:00:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:52.091 05:00:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.091 05:00:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.091 05:00:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.091 05:00:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:10:52.091 05:00:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:10:52.091 05:00:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:10:52.091 05:00:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:10:52.091 05:00:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:10:52.091 05:00:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:10:52.091 05:00:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:10:52.091 05:00:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local 
bdev_list 00:10:52.091 05:00:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:10:52.091 05:00:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:10:52.091 05:00:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:10:52.091 05:00:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:10:52.091 05:00:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:10:52.091 05:00:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:10:52.091 [2024-12-14 05:00:02.932204] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:52.091 /dev/nbd0 00:10:52.091 05:00:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:10:52.091 05:00:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:10:52.091 05:00:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:10:52.091 05:00:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:10:52.091 05:00:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:10:52.091 05:00:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:10:52.351 05:00:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:10:52.351 05:00:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:10:52.351 05:00:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:10:52.351 05:00:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:10:52.351 05:00:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:52.351 1+0 records in 00:10:52.351 1+0 records out 00:10:52.351 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000587274 s, 7.0 MB/s 00:10:52.351 05:00:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:52.351 05:00:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:10:52.351 05:00:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:52.351 05:00:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:10:52.351 05:00:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:10:52.351 05:00:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:52.351 05:00:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:10:52.351 05:00:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:10:52.351 05:00:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:10:52.351 05:00:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:10:55.675 63488+0 records in 00:10:55.675 63488+0 records out 00:10:55.675 32505856 bytes (33 MB, 31 MiB) copied, 3.46276 s, 9.4 MB/s 00:10:55.675 05:00:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:10:55.675 05:00:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:10:55.675 05:00:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:10:55.675 05:00:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:55.675 05:00:06 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@51 -- # local i 00:10:55.675 05:00:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:55.675 05:00:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:10:55.935 05:00:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:55.935 [2024-12-14 05:00:06.673582] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:55.935 05:00:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:55.935 05:00:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:55.935 05:00:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:55.935 05:00:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:55.935 05:00:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:55.935 05:00:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:10:55.935 05:00:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:10:55.935 05:00:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:10:55.935 05:00:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.935 05:00:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.935 [2024-12-14 05:00:06.686548] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:55.935 05:00:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.935 05:00:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:10:55.935 05:00:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:10:55.935 05:00:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:55.935 05:00:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:55.935 05:00:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:55.935 05:00:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:10:55.935 05:00:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:55.935 05:00:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:55.935 05:00:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:55.935 05:00:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:55.935 05:00:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:55.935 05:00:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.935 05:00:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.935 05:00:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:55.935 05:00:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.935 05:00:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:55.935 "name": "raid_bdev1", 00:10:55.935 "uuid": "6061cdf4-4d05-4d8d-ad29-63778ed4c8ae", 00:10:55.935 "strip_size_kb": 0, 00:10:55.935 "state": "online", 00:10:55.935 "raid_level": "raid1", 00:10:55.935 "superblock": true, 00:10:55.935 "num_base_bdevs": 2, 00:10:55.935 "num_base_bdevs_discovered": 1, 00:10:55.935 "num_base_bdevs_operational": 1, 00:10:55.935 "base_bdevs_list": [ 00:10:55.935 { 00:10:55.935 "name": null, 00:10:55.935 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:10:55.935 "is_configured": false, 00:10:55.935 "data_offset": 0, 00:10:55.935 "data_size": 63488 00:10:55.935 }, 00:10:55.935 { 00:10:55.935 "name": "BaseBdev2", 00:10:55.935 "uuid": "f1b1c70d-a5a4-5df0-bb85-76ceb0c72cd7", 00:10:55.935 "is_configured": true, 00:10:55.935 "data_offset": 2048, 00:10:55.935 "data_size": 63488 00:10:55.935 } 00:10:55.935 ] 00:10:55.935 }' 00:10:55.935 05:00:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:55.935 05:00:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.505 05:00:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:10:56.505 05:00:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.505 05:00:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.505 [2024-12-14 05:00:07.133804] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:10:56.505 [2024-12-14 05:00:07.138035] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca31c0 00:10:56.505 05:00:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.505 [2024-12-14 05:00:07.139934] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:10:56.505 05:00:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:10:57.444 05:00:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:10:57.444 05:00:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:57.444 05:00:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:10:57.444 05:00:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:10:57.444 
05:00:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:57.444 05:00:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:57.444 05:00:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:57.444 05:00:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.444 05:00:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.444 05:00:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.444 05:00:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:57.444 "name": "raid_bdev1", 00:10:57.444 "uuid": "6061cdf4-4d05-4d8d-ad29-63778ed4c8ae", 00:10:57.444 "strip_size_kb": 0, 00:10:57.444 "state": "online", 00:10:57.444 "raid_level": "raid1", 00:10:57.444 "superblock": true, 00:10:57.444 "num_base_bdevs": 2, 00:10:57.444 "num_base_bdevs_discovered": 2, 00:10:57.444 "num_base_bdevs_operational": 2, 00:10:57.444 "process": { 00:10:57.444 "type": "rebuild", 00:10:57.444 "target": "spare", 00:10:57.444 "progress": { 00:10:57.444 "blocks": 20480, 00:10:57.444 "percent": 32 00:10:57.444 } 00:10:57.444 }, 00:10:57.444 "base_bdevs_list": [ 00:10:57.444 { 00:10:57.444 "name": "spare", 00:10:57.444 "uuid": "0444b599-866b-521b-9a21-4856c3cbd92a", 00:10:57.444 "is_configured": true, 00:10:57.444 "data_offset": 2048, 00:10:57.444 "data_size": 63488 00:10:57.444 }, 00:10:57.444 { 00:10:57.444 "name": "BaseBdev2", 00:10:57.444 "uuid": "f1b1c70d-a5a4-5df0-bb85-76ceb0c72cd7", 00:10:57.444 "is_configured": true, 00:10:57.444 "data_offset": 2048, 00:10:57.444 "data_size": 63488 00:10:57.444 } 00:10:57.444 ] 00:10:57.444 }' 00:10:57.445 05:00:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:57.445 05:00:08 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:10:57.445 05:00:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:57.445 05:00:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:10:57.445 05:00:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:10:57.445 05:00:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.445 05:00:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.445 [2024-12-14 05:00:08.304576] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:10:57.704 [2024-12-14 05:00:08.344483] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:10:57.704 [2024-12-14 05:00:08.344594] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:57.704 [2024-12-14 05:00:08.344651] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:10:57.704 [2024-12-14 05:00:08.344673] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:10:57.704 05:00:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.704 05:00:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:10:57.704 05:00:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:57.704 05:00:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:57.704 05:00:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:57.704 05:00:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:57.704 05:00:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=1 00:10:57.704 05:00:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:57.704 05:00:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:57.704 05:00:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:57.704 05:00:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:57.704 05:00:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:57.704 05:00:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:57.704 05:00:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.704 05:00:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.704 05:00:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.705 05:00:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:57.705 "name": "raid_bdev1", 00:10:57.705 "uuid": "6061cdf4-4d05-4d8d-ad29-63778ed4c8ae", 00:10:57.705 "strip_size_kb": 0, 00:10:57.705 "state": "online", 00:10:57.705 "raid_level": "raid1", 00:10:57.705 "superblock": true, 00:10:57.705 "num_base_bdevs": 2, 00:10:57.705 "num_base_bdevs_discovered": 1, 00:10:57.705 "num_base_bdevs_operational": 1, 00:10:57.705 "base_bdevs_list": [ 00:10:57.705 { 00:10:57.705 "name": null, 00:10:57.705 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:57.705 "is_configured": false, 00:10:57.705 "data_offset": 0, 00:10:57.705 "data_size": 63488 00:10:57.705 }, 00:10:57.705 { 00:10:57.705 "name": "BaseBdev2", 00:10:57.705 "uuid": "f1b1c70d-a5a4-5df0-bb85-76ceb0c72cd7", 00:10:57.705 "is_configured": true, 00:10:57.705 "data_offset": 2048, 00:10:57.705 "data_size": 63488 00:10:57.705 } 00:10:57.705 ] 00:10:57.705 }' 00:10:57.705 05:00:08 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:57.705 05:00:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.964 05:00:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:10:57.964 05:00:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:57.964 05:00:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:10:57.964 05:00:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:10:57.964 05:00:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:57.964 05:00:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:57.964 05:00:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.964 05:00:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.964 05:00:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:57.964 05:00:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.224 05:00:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:58.224 "name": "raid_bdev1", 00:10:58.224 "uuid": "6061cdf4-4d05-4d8d-ad29-63778ed4c8ae", 00:10:58.224 "strip_size_kb": 0, 00:10:58.224 "state": "online", 00:10:58.224 "raid_level": "raid1", 00:10:58.224 "superblock": true, 00:10:58.224 "num_base_bdevs": 2, 00:10:58.224 "num_base_bdevs_discovered": 1, 00:10:58.224 "num_base_bdevs_operational": 1, 00:10:58.224 "base_bdevs_list": [ 00:10:58.224 { 00:10:58.224 "name": null, 00:10:58.224 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:58.224 "is_configured": false, 00:10:58.224 "data_offset": 0, 00:10:58.224 "data_size": 63488 00:10:58.224 }, 00:10:58.224 
{ 00:10:58.224 "name": "BaseBdev2", 00:10:58.224 "uuid": "f1b1c70d-a5a4-5df0-bb85-76ceb0c72cd7", 00:10:58.224 "is_configured": true, 00:10:58.224 "data_offset": 2048, 00:10:58.224 "data_size": 63488 00:10:58.224 } 00:10:58.224 ] 00:10:58.224 }' 00:10:58.224 05:00:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:58.224 05:00:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:10:58.224 05:00:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:58.224 05:00:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:10:58.224 05:00:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:10:58.224 05:00:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.224 05:00:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.224 [2024-12-14 05:00:08.908371] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:10:58.224 [2024-12-14 05:00:08.912359] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3290 00:10:58.224 05:00:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.224 05:00:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:10:58.224 [2024-12-14 05:00:08.914215] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:10:59.163 05:00:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:10:59.163 05:00:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:59.163 05:00:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:10:59.163 05:00:09 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:10:59.163 05:00:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:59.163 05:00:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:59.163 05:00:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.163 05:00:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:59.163 05:00:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.163 05:00:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.163 05:00:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:59.163 "name": "raid_bdev1", 00:10:59.163 "uuid": "6061cdf4-4d05-4d8d-ad29-63778ed4c8ae", 00:10:59.163 "strip_size_kb": 0, 00:10:59.163 "state": "online", 00:10:59.163 "raid_level": "raid1", 00:10:59.163 "superblock": true, 00:10:59.163 "num_base_bdevs": 2, 00:10:59.163 "num_base_bdevs_discovered": 2, 00:10:59.163 "num_base_bdevs_operational": 2, 00:10:59.163 "process": { 00:10:59.163 "type": "rebuild", 00:10:59.163 "target": "spare", 00:10:59.163 "progress": { 00:10:59.163 "blocks": 20480, 00:10:59.163 "percent": 32 00:10:59.163 } 00:10:59.163 }, 00:10:59.163 "base_bdevs_list": [ 00:10:59.163 { 00:10:59.163 "name": "spare", 00:10:59.163 "uuid": "0444b599-866b-521b-9a21-4856c3cbd92a", 00:10:59.163 "is_configured": true, 00:10:59.163 "data_offset": 2048, 00:10:59.163 "data_size": 63488 00:10:59.163 }, 00:10:59.163 { 00:10:59.163 "name": "BaseBdev2", 00:10:59.163 "uuid": "f1b1c70d-a5a4-5df0-bb85-76ceb0c72cd7", 00:10:59.163 "is_configured": true, 00:10:59.163 "data_offset": 2048, 00:10:59.163 "data_size": 63488 00:10:59.163 } 00:10:59.163 ] 00:10:59.163 }' 00:10:59.163 05:00:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r 
'.process.type // "none"' 00:10:59.163 05:00:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:10:59.163 05:00:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:59.423 05:00:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:10:59.423 05:00:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:10:59.423 05:00:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:10:59.423 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:10:59.423 05:00:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:10:59.423 05:00:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:10:59.423 05:00:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:10:59.423 05:00:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=301 00:10:59.423 05:00:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:10:59.423 05:00:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:10:59.423 05:00:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:59.423 05:00:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:10:59.423 05:00:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:10:59.423 05:00:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:59.423 05:00:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:59.423 05:00:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:10:59.423 05:00:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.423 05:00:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.423 05:00:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.423 05:00:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:59.423 "name": "raid_bdev1", 00:10:59.423 "uuid": "6061cdf4-4d05-4d8d-ad29-63778ed4c8ae", 00:10:59.423 "strip_size_kb": 0, 00:10:59.423 "state": "online", 00:10:59.423 "raid_level": "raid1", 00:10:59.423 "superblock": true, 00:10:59.423 "num_base_bdevs": 2, 00:10:59.423 "num_base_bdevs_discovered": 2, 00:10:59.423 "num_base_bdevs_operational": 2, 00:10:59.423 "process": { 00:10:59.423 "type": "rebuild", 00:10:59.423 "target": "spare", 00:10:59.423 "progress": { 00:10:59.423 "blocks": 22528, 00:10:59.423 "percent": 35 00:10:59.423 } 00:10:59.423 }, 00:10:59.423 "base_bdevs_list": [ 00:10:59.423 { 00:10:59.423 "name": "spare", 00:10:59.423 "uuid": "0444b599-866b-521b-9a21-4856c3cbd92a", 00:10:59.423 "is_configured": true, 00:10:59.423 "data_offset": 2048, 00:10:59.423 "data_size": 63488 00:10:59.423 }, 00:10:59.423 { 00:10:59.423 "name": "BaseBdev2", 00:10:59.423 "uuid": "f1b1c70d-a5a4-5df0-bb85-76ceb0c72cd7", 00:10:59.423 "is_configured": true, 00:10:59.423 "data_offset": 2048, 00:10:59.423 "data_size": 63488 00:10:59.423 } 00:10:59.423 ] 00:10:59.423 }' 00:10:59.423 05:00:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:59.423 05:00:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:10:59.423 05:00:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:59.423 05:00:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:10:59.423 05:00:10 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:00.362 05:00:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:00.362 05:00:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:00.362 05:00:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:00.362 05:00:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:00.362 05:00:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:00.362 05:00:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:00.362 05:00:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:00.362 05:00:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:00.362 05:00:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.362 05:00:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.362 05:00:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.622 05:00:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:00.622 "name": "raid_bdev1", 00:11:00.622 "uuid": "6061cdf4-4d05-4d8d-ad29-63778ed4c8ae", 00:11:00.622 "strip_size_kb": 0, 00:11:00.622 "state": "online", 00:11:00.622 "raid_level": "raid1", 00:11:00.622 "superblock": true, 00:11:00.622 "num_base_bdevs": 2, 00:11:00.622 "num_base_bdevs_discovered": 2, 00:11:00.622 "num_base_bdevs_operational": 2, 00:11:00.622 "process": { 00:11:00.622 "type": "rebuild", 00:11:00.622 "target": "spare", 00:11:00.622 "progress": { 00:11:00.622 "blocks": 45056, 00:11:00.622 "percent": 70 00:11:00.622 } 00:11:00.622 }, 00:11:00.622 "base_bdevs_list": [ 00:11:00.622 { 
00:11:00.622 "name": "spare", 00:11:00.622 "uuid": "0444b599-866b-521b-9a21-4856c3cbd92a", 00:11:00.622 "is_configured": true, 00:11:00.622 "data_offset": 2048, 00:11:00.622 "data_size": 63488 00:11:00.622 }, 00:11:00.622 { 00:11:00.622 "name": "BaseBdev2", 00:11:00.622 "uuid": "f1b1c70d-a5a4-5df0-bb85-76ceb0c72cd7", 00:11:00.622 "is_configured": true, 00:11:00.622 "data_offset": 2048, 00:11:00.622 "data_size": 63488 00:11:00.622 } 00:11:00.622 ] 00:11:00.622 }' 00:11:00.622 05:00:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:00.622 05:00:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:00.622 05:00:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:00.622 05:00:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:00.622 05:00:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:01.191 [2024-12-14 05:00:12.024663] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:11:01.191 [2024-12-14 05:00:12.024810] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:11:01.191 [2024-12-14 05:00:12.024951] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:01.451 05:00:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:01.451 05:00:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:01.451 05:00:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:01.451 05:00:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:01.451 05:00:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:01.451 05:00:12 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:01.710 05:00:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:01.710 05:00:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:01.710 05:00:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.710 05:00:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.710 05:00:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.710 05:00:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:01.710 "name": "raid_bdev1", 00:11:01.710 "uuid": "6061cdf4-4d05-4d8d-ad29-63778ed4c8ae", 00:11:01.710 "strip_size_kb": 0, 00:11:01.710 "state": "online", 00:11:01.710 "raid_level": "raid1", 00:11:01.710 "superblock": true, 00:11:01.711 "num_base_bdevs": 2, 00:11:01.711 "num_base_bdevs_discovered": 2, 00:11:01.711 "num_base_bdevs_operational": 2, 00:11:01.711 "base_bdevs_list": [ 00:11:01.711 { 00:11:01.711 "name": "spare", 00:11:01.711 "uuid": "0444b599-866b-521b-9a21-4856c3cbd92a", 00:11:01.711 "is_configured": true, 00:11:01.711 "data_offset": 2048, 00:11:01.711 "data_size": 63488 00:11:01.711 }, 00:11:01.711 { 00:11:01.711 "name": "BaseBdev2", 00:11:01.711 "uuid": "f1b1c70d-a5a4-5df0-bb85-76ceb0c72cd7", 00:11:01.711 "is_configured": true, 00:11:01.711 "data_offset": 2048, 00:11:01.711 "data_size": 63488 00:11:01.711 } 00:11:01.711 ] 00:11:01.711 }' 00:11:01.711 05:00:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:01.711 05:00:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:11:01.711 05:00:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:01.711 05:00:12 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:11:01.711 05:00:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:11:01.711 05:00:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:01.711 05:00:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:01.711 05:00:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:01.711 05:00:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:01.711 05:00:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:01.711 05:00:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:01.711 05:00:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.711 05:00:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.711 05:00:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:01.711 05:00:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.711 05:00:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:01.711 "name": "raid_bdev1", 00:11:01.711 "uuid": "6061cdf4-4d05-4d8d-ad29-63778ed4c8ae", 00:11:01.711 "strip_size_kb": 0, 00:11:01.711 "state": "online", 00:11:01.711 "raid_level": "raid1", 00:11:01.711 "superblock": true, 00:11:01.711 "num_base_bdevs": 2, 00:11:01.711 "num_base_bdevs_discovered": 2, 00:11:01.711 "num_base_bdevs_operational": 2, 00:11:01.711 "base_bdevs_list": [ 00:11:01.711 { 00:11:01.711 "name": "spare", 00:11:01.711 "uuid": "0444b599-866b-521b-9a21-4856c3cbd92a", 00:11:01.711 "is_configured": true, 00:11:01.711 "data_offset": 2048, 00:11:01.711 "data_size": 63488 00:11:01.711 }, 00:11:01.711 { 00:11:01.711 "name": 
"BaseBdev2", 00:11:01.711 "uuid": "f1b1c70d-a5a4-5df0-bb85-76ceb0c72cd7", 00:11:01.711 "is_configured": true, 00:11:01.711 "data_offset": 2048, 00:11:01.711 "data_size": 63488 00:11:01.711 } 00:11:01.711 ] 00:11:01.711 }' 00:11:01.711 05:00:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:01.711 05:00:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:01.711 05:00:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:01.711 05:00:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:01.711 05:00:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:01.711 05:00:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:01.711 05:00:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:01.711 05:00:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:01.711 05:00:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:01.711 05:00:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:01.711 05:00:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:01.711 05:00:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:01.711 05:00:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:01.711 05:00:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:01.711 05:00:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:01.971 05:00:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:11:01.971 05:00:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.971 05:00:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.971 05:00:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.971 05:00:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:01.971 "name": "raid_bdev1", 00:11:01.971 "uuid": "6061cdf4-4d05-4d8d-ad29-63778ed4c8ae", 00:11:01.971 "strip_size_kb": 0, 00:11:01.971 "state": "online", 00:11:01.971 "raid_level": "raid1", 00:11:01.971 "superblock": true, 00:11:01.971 "num_base_bdevs": 2, 00:11:01.971 "num_base_bdevs_discovered": 2, 00:11:01.971 "num_base_bdevs_operational": 2, 00:11:01.971 "base_bdevs_list": [ 00:11:01.971 { 00:11:01.971 "name": "spare", 00:11:01.971 "uuid": "0444b599-866b-521b-9a21-4856c3cbd92a", 00:11:01.971 "is_configured": true, 00:11:01.971 "data_offset": 2048, 00:11:01.971 "data_size": 63488 00:11:01.971 }, 00:11:01.971 { 00:11:01.971 "name": "BaseBdev2", 00:11:01.971 "uuid": "f1b1c70d-a5a4-5df0-bb85-76ceb0c72cd7", 00:11:01.971 "is_configured": true, 00:11:01.971 "data_offset": 2048, 00:11:01.971 "data_size": 63488 00:11:01.971 } 00:11:01.971 ] 00:11:01.971 }' 00:11:01.971 05:00:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:01.971 05:00:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.231 05:00:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:02.231 05:00:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.231 05:00:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.231 [2024-12-14 05:00:13.015376] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:02.231 [2024-12-14 05:00:13.015443] 
bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:02.231 [2024-12-14 05:00:13.015547] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:02.231 [2024-12-14 05:00:13.015638] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:02.231 [2024-12-14 05:00:13.015695] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:11:02.231 05:00:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.231 05:00:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:02.231 05:00:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.231 05:00:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.231 05:00:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:11:02.231 05:00:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.231 05:00:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:11:02.231 05:00:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:11:02.231 05:00:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:11:02.231 05:00:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:11:02.231 05:00:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:11:02.231 05:00:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:11:02.231 05:00:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:02.231 05:00:13 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:02.231 05:00:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:02.231 05:00:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:11:02.231 05:00:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:02.231 05:00:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:02.231 05:00:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:11:02.491 /dev/nbd0 00:11:02.491 05:00:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:02.491 05:00:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:02.491 05:00:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:11:02.491 05:00:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:11:02.491 05:00:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:02.491 05:00:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:02.491 05:00:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:11:02.491 05:00:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:11:02.491 05:00:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:02.491 05:00:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:02.491 05:00:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:02.491 1+0 records in 00:11:02.491 1+0 records out 00:11:02.491 4096 bytes (4.1 kB, 4.0 KiB) copied, 
0.000381052 s, 10.7 MB/s 00:11:02.491 05:00:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:02.491 05:00:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:11:02.491 05:00:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:02.491 05:00:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:02.491 05:00:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:11:02.491 05:00:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:02.491 05:00:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:02.491 05:00:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:11:02.751 /dev/nbd1 00:11:02.751 05:00:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:11:02.751 05:00:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:11:02.751 05:00:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:11:02.751 05:00:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:11:02.751 05:00:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:02.751 05:00:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:02.751 05:00:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:11:02.751 05:00:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:11:02.751 05:00:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:02.751 05:00:13 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:02.751 05:00:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:02.751 1+0 records in 00:11:02.751 1+0 records out 00:11:02.751 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000405291 s, 10.1 MB/s 00:11:02.751 05:00:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:02.751 05:00:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:11:02.751 05:00:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:02.751 05:00:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:02.751 05:00:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:11:02.751 05:00:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:02.751 05:00:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:02.751 05:00:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:11:02.751 05:00:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:11:02.751 05:00:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:11:02.751 05:00:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:02.752 05:00:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:02.752 05:00:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:11:02.752 05:00:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:02.752 
05:00:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:11:03.011 05:00:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:03.011 05:00:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:03.011 05:00:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:03.011 05:00:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:03.011 05:00:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:03.011 05:00:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:03.011 05:00:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:11:03.011 05:00:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:11:03.012 05:00:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:03.012 05:00:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:11:03.271 05:00:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:03.272 05:00:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:03.272 05:00:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:03.272 05:00:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:03.272 05:00:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:03.272 05:00:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:03.272 05:00:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:11:03.272 05:00:14 bdev_raid.raid_rebuild_test_sb 
-- bdev/nbd_common.sh@45 -- # return 0 00:11:03.272 05:00:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:11:03.272 05:00:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:11:03.272 05:00:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.272 05:00:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.272 05:00:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.272 05:00:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:11:03.272 05:00:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.272 05:00:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.272 [2024-12-14 05:00:14.034032] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:11:03.272 [2024-12-14 05:00:14.034142] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:03.272 [2024-12-14 05:00:14.034215] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:03.272 [2024-12-14 05:00:14.034265] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:03.272 [2024-12-14 05:00:14.036453] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:03.272 [2024-12-14 05:00:14.036530] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:11:03.272 [2024-12-14 05:00:14.036689] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:11:03.272 [2024-12-14 05:00:14.036783] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:03.272 [2024-12-14 05:00:14.036962] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 
is claimed 00:11:03.272 spare 00:11:03.272 05:00:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.272 05:00:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:11:03.272 05:00:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.272 05:00:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.272 [2024-12-14 05:00:14.136917] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006600 00:11:03.272 [2024-12-14 05:00:14.136987] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:03.272 [2024-12-14 05:00:14.137315] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1940 00:11:03.272 [2024-12-14 05:00:14.137531] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006600 00:11:03.272 [2024-12-14 05:00:14.137583] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006600 00:11:03.272 [2024-12-14 05:00:14.137785] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:03.272 05:00:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.272 05:00:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:03.272 05:00:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:03.272 05:00:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:03.272 05:00:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:03.272 05:00:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:03.272 05:00:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:11:03.272 05:00:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:03.272 05:00:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:03.272 05:00:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:03.272 05:00:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:03.272 05:00:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:03.272 05:00:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:03.272 05:00:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.272 05:00:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.532 05:00:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.532 05:00:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:03.532 "name": "raid_bdev1", 00:11:03.532 "uuid": "6061cdf4-4d05-4d8d-ad29-63778ed4c8ae", 00:11:03.532 "strip_size_kb": 0, 00:11:03.532 "state": "online", 00:11:03.532 "raid_level": "raid1", 00:11:03.532 "superblock": true, 00:11:03.532 "num_base_bdevs": 2, 00:11:03.532 "num_base_bdevs_discovered": 2, 00:11:03.532 "num_base_bdevs_operational": 2, 00:11:03.532 "base_bdevs_list": [ 00:11:03.532 { 00:11:03.532 "name": "spare", 00:11:03.532 "uuid": "0444b599-866b-521b-9a21-4856c3cbd92a", 00:11:03.532 "is_configured": true, 00:11:03.532 "data_offset": 2048, 00:11:03.532 "data_size": 63488 00:11:03.532 }, 00:11:03.532 { 00:11:03.532 "name": "BaseBdev2", 00:11:03.532 "uuid": "f1b1c70d-a5a4-5df0-bb85-76ceb0c72cd7", 00:11:03.532 "is_configured": true, 00:11:03.532 "data_offset": 2048, 00:11:03.532 "data_size": 63488 00:11:03.532 } 00:11:03.532 ] 00:11:03.532 }' 00:11:03.532 05:00:14 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:03.532 05:00:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.791 05:00:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:03.791 05:00:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:03.791 05:00:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:03.791 05:00:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:03.791 05:00:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:03.791 05:00:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:03.791 05:00:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.791 05:00:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.791 05:00:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:03.791 05:00:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.791 05:00:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:03.791 "name": "raid_bdev1", 00:11:03.791 "uuid": "6061cdf4-4d05-4d8d-ad29-63778ed4c8ae", 00:11:03.791 "strip_size_kb": 0, 00:11:03.791 "state": "online", 00:11:03.791 "raid_level": "raid1", 00:11:03.791 "superblock": true, 00:11:03.791 "num_base_bdevs": 2, 00:11:03.791 "num_base_bdevs_discovered": 2, 00:11:03.791 "num_base_bdevs_operational": 2, 00:11:03.791 "base_bdevs_list": [ 00:11:03.791 { 00:11:03.791 "name": "spare", 00:11:03.791 "uuid": "0444b599-866b-521b-9a21-4856c3cbd92a", 00:11:03.791 "is_configured": true, 00:11:03.791 "data_offset": 2048, 00:11:03.791 "data_size": 63488 00:11:03.791 }, 
00:11:03.791 { 00:11:03.791 "name": "BaseBdev2", 00:11:03.791 "uuid": "f1b1c70d-a5a4-5df0-bb85-76ceb0c72cd7", 00:11:03.791 "is_configured": true, 00:11:03.791 "data_offset": 2048, 00:11:03.791 "data_size": 63488 00:11:03.791 } 00:11:03.791 ] 00:11:03.791 }' 00:11:03.791 05:00:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:03.791 05:00:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:03.791 05:00:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:03.791 05:00:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:03.791 05:00:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:03.791 05:00:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:11:03.791 05:00:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.791 05:00:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.791 05:00:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.051 05:00:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:11:04.051 05:00:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:11:04.051 05:00:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.051 05:00:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:04.051 [2024-12-14 05:00:14.704872] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:04.051 05:00:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.051 05:00:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:11:04.051 05:00:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:04.051 05:00:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:04.051 05:00:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:04.051 05:00:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:04.051 05:00:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:04.051 05:00:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:04.051 05:00:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:04.051 05:00:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:04.051 05:00:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:04.051 05:00:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:04.051 05:00:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:04.051 05:00:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.051 05:00:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:04.051 05:00:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.051 05:00:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:04.051 "name": "raid_bdev1", 00:11:04.051 "uuid": "6061cdf4-4d05-4d8d-ad29-63778ed4c8ae", 00:11:04.051 "strip_size_kb": 0, 00:11:04.051 "state": "online", 00:11:04.051 "raid_level": "raid1", 00:11:04.051 "superblock": true, 00:11:04.051 "num_base_bdevs": 2, 00:11:04.051 "num_base_bdevs_discovered": 1, 00:11:04.051 "num_base_bdevs_operational": 
1, 00:11:04.051 "base_bdevs_list": [ 00:11:04.051 { 00:11:04.051 "name": null, 00:11:04.051 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:04.051 "is_configured": false, 00:11:04.051 "data_offset": 0, 00:11:04.051 "data_size": 63488 00:11:04.051 }, 00:11:04.051 { 00:11:04.051 "name": "BaseBdev2", 00:11:04.051 "uuid": "f1b1c70d-a5a4-5df0-bb85-76ceb0c72cd7", 00:11:04.051 "is_configured": true, 00:11:04.051 "data_offset": 2048, 00:11:04.051 "data_size": 63488 00:11:04.051 } 00:11:04.051 ] 00:11:04.051 }' 00:11:04.051 05:00:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:04.051 05:00:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:04.311 05:00:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:11:04.311 05:00:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.311 05:00:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:04.311 [2024-12-14 05:00:15.100215] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:04.311 [2024-12-14 05:00:15.100434] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:11:04.311 [2024-12-14 05:00:15.100506] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:11:04.311 [2024-12-14 05:00:15.100597] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:04.311 [2024-12-14 05:00:15.104620] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1a10 00:11:04.311 05:00:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.311 05:00:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:11:04.311 [2024-12-14 05:00:15.106529] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:05.249 05:00:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:05.249 05:00:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:05.249 05:00:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:05.249 05:00:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:05.249 05:00:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:05.249 05:00:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:05.249 05:00:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:05.249 05:00:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.249 05:00:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.509 05:00:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.509 05:00:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:05.509 "name": "raid_bdev1", 00:11:05.509 "uuid": "6061cdf4-4d05-4d8d-ad29-63778ed4c8ae", 00:11:05.509 "strip_size_kb": 0, 00:11:05.509 "state": "online", 00:11:05.510 "raid_level": "raid1", 
00:11:05.510 "superblock": true, 00:11:05.510 "num_base_bdevs": 2, 00:11:05.510 "num_base_bdevs_discovered": 2, 00:11:05.510 "num_base_bdevs_operational": 2, 00:11:05.510 "process": { 00:11:05.510 "type": "rebuild", 00:11:05.510 "target": "spare", 00:11:05.510 "progress": { 00:11:05.510 "blocks": 20480, 00:11:05.510 "percent": 32 00:11:05.510 } 00:11:05.510 }, 00:11:05.510 "base_bdevs_list": [ 00:11:05.510 { 00:11:05.510 "name": "spare", 00:11:05.510 "uuid": "0444b599-866b-521b-9a21-4856c3cbd92a", 00:11:05.510 "is_configured": true, 00:11:05.510 "data_offset": 2048, 00:11:05.510 "data_size": 63488 00:11:05.510 }, 00:11:05.510 { 00:11:05.510 "name": "BaseBdev2", 00:11:05.510 "uuid": "f1b1c70d-a5a4-5df0-bb85-76ceb0c72cd7", 00:11:05.510 "is_configured": true, 00:11:05.510 "data_offset": 2048, 00:11:05.510 "data_size": 63488 00:11:05.510 } 00:11:05.510 ] 00:11:05.510 }' 00:11:05.510 05:00:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:05.510 05:00:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:05.510 05:00:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:05.510 05:00:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:05.510 05:00:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:11:05.510 05:00:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.510 05:00:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.510 [2024-12-14 05:00:16.259433] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:05.510 [2024-12-14 05:00:16.310461] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:11:05.510 [2024-12-14 05:00:16.310512] bdev_raid.c: 345:raid_bdev_destroy_cb: 
*DEBUG*: raid_bdev_destroy_cb 00:11:05.510 [2024-12-14 05:00:16.310527] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:05.510 [2024-12-14 05:00:16.310534] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:11:05.510 05:00:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.510 05:00:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:05.510 05:00:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:05.510 05:00:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:05.510 05:00:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:05.510 05:00:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:05.510 05:00:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:05.510 05:00:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:05.510 05:00:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:05.510 05:00:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:05.510 05:00:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:05.510 05:00:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:05.510 05:00:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:05.510 05:00:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.510 05:00:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.510 05:00:16 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.510 05:00:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:05.510 "name": "raid_bdev1", 00:11:05.510 "uuid": "6061cdf4-4d05-4d8d-ad29-63778ed4c8ae", 00:11:05.510 "strip_size_kb": 0, 00:11:05.510 "state": "online", 00:11:05.510 "raid_level": "raid1", 00:11:05.510 "superblock": true, 00:11:05.510 "num_base_bdevs": 2, 00:11:05.510 "num_base_bdevs_discovered": 1, 00:11:05.510 "num_base_bdevs_operational": 1, 00:11:05.510 "base_bdevs_list": [ 00:11:05.510 { 00:11:05.510 "name": null, 00:11:05.510 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:05.510 "is_configured": false, 00:11:05.510 "data_offset": 0, 00:11:05.510 "data_size": 63488 00:11:05.510 }, 00:11:05.510 { 00:11:05.510 "name": "BaseBdev2", 00:11:05.510 "uuid": "f1b1c70d-a5a4-5df0-bb85-76ceb0c72cd7", 00:11:05.510 "is_configured": true, 00:11:05.510 "data_offset": 2048, 00:11:05.510 "data_size": 63488 00:11:05.510 } 00:11:05.510 ] 00:11:05.510 }' 00:11:05.510 05:00:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:05.510 05:00:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.079 05:00:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:11:06.079 05:00:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.079 05:00:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.079 [2024-12-14 05:00:16.741839] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:11:06.079 [2024-12-14 05:00:16.741940] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:06.079 [2024-12-14 05:00:16.741982] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:11:06.079 [2024-12-14 05:00:16.742010] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:06.079 [2024-12-14 05:00:16.742510] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:06.079 [2024-12-14 05:00:16.742576] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:11:06.079 [2024-12-14 05:00:16.742709] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:11:06.079 [2024-12-14 05:00:16.742755] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:11:06.079 [2024-12-14 05:00:16.742818] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:11:06.079 [2024-12-14 05:00:16.742891] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:06.079 [2024-12-14 05:00:16.746910] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1ae0 00:11:06.079 spare 00:11:06.079 05:00:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.079 05:00:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:11:06.079 [2024-12-14 05:00:16.748818] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:07.018 05:00:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:07.018 05:00:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:07.018 05:00:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:07.018 05:00:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:07.018 05:00:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:07.018 05:00:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:11:07.018 05:00:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:07.019 05:00:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.019 05:00:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.019 05:00:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.019 05:00:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:07.019 "name": "raid_bdev1", 00:11:07.019 "uuid": "6061cdf4-4d05-4d8d-ad29-63778ed4c8ae", 00:11:07.019 "strip_size_kb": 0, 00:11:07.019 "state": "online", 00:11:07.019 "raid_level": "raid1", 00:11:07.019 "superblock": true, 00:11:07.019 "num_base_bdevs": 2, 00:11:07.019 "num_base_bdevs_discovered": 2, 00:11:07.019 "num_base_bdevs_operational": 2, 00:11:07.019 "process": { 00:11:07.019 "type": "rebuild", 00:11:07.019 "target": "spare", 00:11:07.019 "progress": { 00:11:07.019 "blocks": 20480, 00:11:07.019 "percent": 32 00:11:07.019 } 00:11:07.019 }, 00:11:07.019 "base_bdevs_list": [ 00:11:07.019 { 00:11:07.019 "name": "spare", 00:11:07.019 "uuid": "0444b599-866b-521b-9a21-4856c3cbd92a", 00:11:07.019 "is_configured": true, 00:11:07.019 "data_offset": 2048, 00:11:07.019 "data_size": 63488 00:11:07.019 }, 00:11:07.019 { 00:11:07.019 "name": "BaseBdev2", 00:11:07.019 "uuid": "f1b1c70d-a5a4-5df0-bb85-76ceb0c72cd7", 00:11:07.019 "is_configured": true, 00:11:07.019 "data_offset": 2048, 00:11:07.019 "data_size": 63488 00:11:07.019 } 00:11:07.019 ] 00:11:07.019 }' 00:11:07.019 05:00:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:07.019 05:00:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:07.019 05:00:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:07.279 
05:00:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:07.279 05:00:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:11:07.279 05:00:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.279 05:00:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.279 [2024-12-14 05:00:17.905019] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:07.279 [2024-12-14 05:00:17.952860] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:11:07.279 [2024-12-14 05:00:17.952993] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:07.279 [2024-12-14 05:00:17.953031] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:07.279 [2024-12-14 05:00:17.953082] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:11:07.279 05:00:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.279 05:00:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:07.279 05:00:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:07.279 05:00:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:07.279 05:00:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:07.279 05:00:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:07.279 05:00:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:07.279 05:00:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:07.279 05:00:17 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:07.279 05:00:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:07.279 05:00:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:07.279 05:00:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:07.279 05:00:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:07.279 05:00:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.279 05:00:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.279 05:00:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.279 05:00:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:07.279 "name": "raid_bdev1", 00:11:07.279 "uuid": "6061cdf4-4d05-4d8d-ad29-63778ed4c8ae", 00:11:07.279 "strip_size_kb": 0, 00:11:07.279 "state": "online", 00:11:07.279 "raid_level": "raid1", 00:11:07.279 "superblock": true, 00:11:07.279 "num_base_bdevs": 2, 00:11:07.279 "num_base_bdevs_discovered": 1, 00:11:07.279 "num_base_bdevs_operational": 1, 00:11:07.279 "base_bdevs_list": [ 00:11:07.279 { 00:11:07.279 "name": null, 00:11:07.279 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:07.279 "is_configured": false, 00:11:07.279 "data_offset": 0, 00:11:07.279 "data_size": 63488 00:11:07.279 }, 00:11:07.279 { 00:11:07.279 "name": "BaseBdev2", 00:11:07.279 "uuid": "f1b1c70d-a5a4-5df0-bb85-76ceb0c72cd7", 00:11:07.279 "is_configured": true, 00:11:07.279 "data_offset": 2048, 00:11:07.279 "data_size": 63488 00:11:07.279 } 00:11:07.279 ] 00:11:07.279 }' 00:11:07.279 05:00:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:07.279 05:00:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.539 05:00:18 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:07.539 05:00:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:07.539 05:00:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:07.539 05:00:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:07.539 05:00:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:07.539 05:00:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:07.539 05:00:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.539 05:00:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.539 05:00:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:07.539 05:00:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.799 05:00:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:07.799 "name": "raid_bdev1", 00:11:07.799 "uuid": "6061cdf4-4d05-4d8d-ad29-63778ed4c8ae", 00:11:07.799 "strip_size_kb": 0, 00:11:07.799 "state": "online", 00:11:07.799 "raid_level": "raid1", 00:11:07.799 "superblock": true, 00:11:07.799 "num_base_bdevs": 2, 00:11:07.799 "num_base_bdevs_discovered": 1, 00:11:07.799 "num_base_bdevs_operational": 1, 00:11:07.799 "base_bdevs_list": [ 00:11:07.799 { 00:11:07.799 "name": null, 00:11:07.799 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:07.799 "is_configured": false, 00:11:07.799 "data_offset": 0, 00:11:07.799 "data_size": 63488 00:11:07.799 }, 00:11:07.799 { 00:11:07.799 "name": "BaseBdev2", 00:11:07.799 "uuid": "f1b1c70d-a5a4-5df0-bb85-76ceb0c72cd7", 00:11:07.799 "is_configured": true, 00:11:07.799 "data_offset": 2048, 00:11:07.799 "data_size": 
63488 00:11:07.799 } 00:11:07.799 ] 00:11:07.799 }' 00:11:07.799 05:00:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:07.799 05:00:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:07.799 05:00:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:07.799 05:00:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:07.799 05:00:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:11:07.799 05:00:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.799 05:00:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.799 05:00:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.799 05:00:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:11:07.799 05:00:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.799 05:00:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.799 [2024-12-14 05:00:18.528386] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:11:07.799 [2024-12-14 05:00:18.528441] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:07.799 [2024-12-14 05:00:18.528459] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:11:07.799 [2024-12-14 05:00:18.528469] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:07.799 [2024-12-14 05:00:18.528831] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:07.799 [2024-12-14 05:00:18.528849] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev 
for: BaseBdev1 00:11:07.799 [2024-12-14 05:00:18.528912] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:11:07.799 [2024-12-14 05:00:18.528932] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:11:07.799 [2024-12-14 05:00:18.528940] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:11:07.799 [2024-12-14 05:00:18.528951] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:11:07.799 BaseBdev1 00:11:07.799 05:00:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.799 05:00:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:11:08.748 05:00:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:08.748 05:00:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:08.748 05:00:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:08.748 05:00:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:08.748 05:00:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:08.748 05:00:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:08.748 05:00:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:08.748 05:00:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:08.748 05:00:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:08.748 05:00:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:08.748 05:00:19 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:08.748 05:00:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:08.748 05:00:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.748 05:00:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.748 05:00:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.748 05:00:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:08.748 "name": "raid_bdev1", 00:11:08.748 "uuid": "6061cdf4-4d05-4d8d-ad29-63778ed4c8ae", 00:11:08.748 "strip_size_kb": 0, 00:11:08.748 "state": "online", 00:11:08.748 "raid_level": "raid1", 00:11:08.748 "superblock": true, 00:11:08.748 "num_base_bdevs": 2, 00:11:08.748 "num_base_bdevs_discovered": 1, 00:11:08.748 "num_base_bdevs_operational": 1, 00:11:08.748 "base_bdevs_list": [ 00:11:08.748 { 00:11:08.748 "name": null, 00:11:08.748 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:08.748 "is_configured": false, 00:11:08.748 "data_offset": 0, 00:11:08.748 "data_size": 63488 00:11:08.748 }, 00:11:08.748 { 00:11:08.748 "name": "BaseBdev2", 00:11:08.748 "uuid": "f1b1c70d-a5a4-5df0-bb85-76ceb0c72cd7", 00:11:08.748 "is_configured": true, 00:11:08.748 "data_offset": 2048, 00:11:08.748 "data_size": 63488 00:11:08.748 } 00:11:08.748 ] 00:11:08.748 }' 00:11:08.748 05:00:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:08.748 05:00:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.330 05:00:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:09.330 05:00:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:09.330 05:00:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local 
process_type=none 00:11:09.330 05:00:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:09.330 05:00:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:09.330 05:00:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:09.330 05:00:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:09.330 05:00:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.330 05:00:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.330 05:00:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.330 05:00:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:09.330 "name": "raid_bdev1", 00:11:09.330 "uuid": "6061cdf4-4d05-4d8d-ad29-63778ed4c8ae", 00:11:09.330 "strip_size_kb": 0, 00:11:09.330 "state": "online", 00:11:09.330 "raid_level": "raid1", 00:11:09.330 "superblock": true, 00:11:09.330 "num_base_bdevs": 2, 00:11:09.330 "num_base_bdevs_discovered": 1, 00:11:09.330 "num_base_bdevs_operational": 1, 00:11:09.330 "base_bdevs_list": [ 00:11:09.330 { 00:11:09.330 "name": null, 00:11:09.330 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:09.330 "is_configured": false, 00:11:09.330 "data_offset": 0, 00:11:09.330 "data_size": 63488 00:11:09.330 }, 00:11:09.330 { 00:11:09.330 "name": "BaseBdev2", 00:11:09.330 "uuid": "f1b1c70d-a5a4-5df0-bb85-76ceb0c72cd7", 00:11:09.330 "is_configured": true, 00:11:09.330 "data_offset": 2048, 00:11:09.330 "data_size": 63488 00:11:09.330 } 00:11:09.330 ] 00:11:09.330 }' 00:11:09.330 05:00:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:09.330 05:00:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:09.330 05:00:20 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:09.330 05:00:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:09.330 05:00:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:11:09.330 05:00:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@650 -- # local es=0 00:11:09.330 05:00:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:11:09.330 05:00:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:11:09.330 05:00:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:09.330 05:00:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:11:09.330 05:00:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:09.330 05:00:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:11:09.331 05:00:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.331 05:00:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.331 [2024-12-14 05:00:20.138061] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:09.331 [2024-12-14 05:00:20.138307] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:11:09.331 [2024-12-14 05:00:20.138375] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:11:09.331 request: 00:11:09.331 { 00:11:09.331 "base_bdev": "BaseBdev1", 00:11:09.331 "raid_bdev": "raid_bdev1", 00:11:09.331 "method": 
"bdev_raid_add_base_bdev", 00:11:09.331 "req_id": 1 00:11:09.331 } 00:11:09.331 Got JSON-RPC error response 00:11:09.331 response: 00:11:09.331 { 00:11:09.331 "code": -22, 00:11:09.331 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:11:09.331 } 00:11:09.331 05:00:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:11:09.331 05:00:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # es=1 00:11:09.331 05:00:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:09.331 05:00:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:09.331 05:00:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:09.331 05:00:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:11:10.711 05:00:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:10.711 05:00:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:10.711 05:00:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:10.711 05:00:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:10.711 05:00:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:10.711 05:00:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:10.711 05:00:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:10.711 05:00:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:10.711 05:00:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:10.711 05:00:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:10.711 05:00:21 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:10.711 05:00:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:10.711 05:00:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.711 05:00:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:10.711 05:00:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.711 05:00:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:10.711 "name": "raid_bdev1", 00:11:10.711 "uuid": "6061cdf4-4d05-4d8d-ad29-63778ed4c8ae", 00:11:10.711 "strip_size_kb": 0, 00:11:10.711 "state": "online", 00:11:10.711 "raid_level": "raid1", 00:11:10.711 "superblock": true, 00:11:10.711 "num_base_bdevs": 2, 00:11:10.711 "num_base_bdevs_discovered": 1, 00:11:10.711 "num_base_bdevs_operational": 1, 00:11:10.711 "base_bdevs_list": [ 00:11:10.711 { 00:11:10.711 "name": null, 00:11:10.711 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:10.711 "is_configured": false, 00:11:10.711 "data_offset": 0, 00:11:10.711 "data_size": 63488 00:11:10.711 }, 00:11:10.711 { 00:11:10.711 "name": "BaseBdev2", 00:11:10.711 "uuid": "f1b1c70d-a5a4-5df0-bb85-76ceb0c72cd7", 00:11:10.711 "is_configured": true, 00:11:10.711 "data_offset": 2048, 00:11:10.711 "data_size": 63488 00:11:10.711 } 00:11:10.711 ] 00:11:10.711 }' 00:11:10.711 05:00:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:10.711 05:00:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:10.711 05:00:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:10.711 05:00:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:10.711 05:00:21 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:10.711 05:00:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:10.711 05:00:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:10.711 05:00:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:10.711 05:00:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:10.711 05:00:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.711 05:00:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:10.711 05:00:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.711 05:00:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:10.711 "name": "raid_bdev1", 00:11:10.711 "uuid": "6061cdf4-4d05-4d8d-ad29-63778ed4c8ae", 00:11:10.711 "strip_size_kb": 0, 00:11:10.711 "state": "online", 00:11:10.711 "raid_level": "raid1", 00:11:10.711 "superblock": true, 00:11:10.711 "num_base_bdevs": 2, 00:11:10.711 "num_base_bdevs_discovered": 1, 00:11:10.711 "num_base_bdevs_operational": 1, 00:11:10.711 "base_bdevs_list": [ 00:11:10.711 { 00:11:10.711 "name": null, 00:11:10.711 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:10.711 "is_configured": false, 00:11:10.711 "data_offset": 0, 00:11:10.711 "data_size": 63488 00:11:10.711 }, 00:11:10.711 { 00:11:10.711 "name": "BaseBdev2", 00:11:10.711 "uuid": "f1b1c70d-a5a4-5df0-bb85-76ceb0c72cd7", 00:11:10.711 "is_configured": true, 00:11:10.711 "data_offset": 2048, 00:11:10.711 "data_size": 63488 00:11:10.711 } 00:11:10.711 ] 00:11:10.711 }' 00:11:10.711 05:00:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:10.971 05:00:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 
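The `verify_raid_bdev_state` and `verify_raid_bdev_process` helpers traced above fetch the array with `rpc_cmd bdev_raid_get_bdevs all`, filter it with `jq -r '.[] | select(.name == "raid_bdev1")'`, and then compare individual fields (`state`, `raid_level`, `process.type`, base bdev counts) against expected values. A minimal Python sketch of those checks, using a field subset of the `raid_bdev1` JSON captured in the log — the function name and exact assertions here are an approximation of the shell helper, not SPDK code:

```python
import json

# Field subset of the raid_bdev_info JSON captured above by
# `rpc_cmd bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")'`.
raid_bdev_info = json.loads("""
{
  "name": "raid_bdev1",
  "state": "online",
  "raid_level": "raid1",
  "strip_size_kb": 0,
  "num_base_bdevs": 2,
  "num_base_bdevs_discovered": 1,
  "num_base_bdevs_operational": 1
}
""")

def verify_raid_bdev_state(info, expected_state, raid_level,
                           strip_size, num_operational):
    # Mirrors the shell helper's field-by-field comparisons (a sketch,
    # not the real bdev_raid.sh implementation).
    assert info["state"] == expected_state
    assert info["raid_level"] == raid_level
    assert info["strip_size_kb"] == strip_size
    assert info["num_base_bdevs_operational"] == num_operational

def verify_raid_bdev_process(info, process_type, target):
    # `jq -r '.process.type // "none"'` defaults to "none" when no
    # rebuild process is running; dict.get emulates the // operator.
    process = info.get("process", {})
    assert process.get("type", "none") == process_type
    assert process.get("target", "none") == target

# The values asserted by the test at this point in the log.
verify_raid_bdev_state(raid_bdev_info, "online", "raid1", 0, 1)
verify_raid_bdev_process(raid_bdev_info, "none", "none")
print("raid_bdev1 state checks passed")
```

The `// "none"` default in the jq filters is what lets the same helper verify both "a rebuild is running against spare" and "no process is active" without a separate RPC.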
00:11:10.971 05:00:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:10.971 05:00:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:10.971 05:00:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 86411 00:11:10.971 05:00:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@950 -- # '[' -z 86411 ']' 00:11:10.971 05:00:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # kill -0 86411 00:11:10.971 05:00:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@955 -- # uname 00:11:10.971 05:00:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:10.971 05:00:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 86411 00:11:10.971 05:00:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:10.971 05:00:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:10.971 05:00:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 86411' 00:11:10.971 killing process with pid 86411 00:11:10.971 05:00:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@969 -- # kill 86411 00:11:10.971 Received shutdown signal, test time was about 60.000000 seconds 00:11:10.971 00:11:10.971 Latency(us) 00:11:10.971 [2024-12-14T05:00:21.854Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:10.971 [2024-12-14T05:00:21.854Z] =================================================================================================================== 00:11:10.971 [2024-12-14T05:00:21.854Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:11:10.971 [2024-12-14 05:00:21.718437] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:10.971 [2024-12-14 
05:00:21.718559] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:10.971 05:00:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@974 -- # wait 86411 00:11:10.971 [2024-12-14 05:00:21.718613] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:10.971 [2024-12-14 05:00:21.718623] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state offline 00:11:10.971 [2024-12-14 05:00:21.750074] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:11.231 05:00:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:11:11.231 00:11:11.231 real 0m20.784s 00:11:11.231 user 0m25.824s 00:11:11.231 sys 0m3.463s 00:11:11.231 05:00:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:11.231 05:00:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:11.231 ************************************ 00:11:11.231 END TEST raid_rebuild_test_sb 00:11:11.231 ************************************ 00:11:11.231 05:00:22 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true true 00:11:11.231 05:00:22 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:11:11.231 05:00:22 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:11.231 05:00:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:11.231 ************************************ 00:11:11.231 START TEST raid_rebuild_test_io 00:11:11.231 ************************************ 00:11:11.231 05:00:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 false true true 00:11:11.231 05:00:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:11:11.231 05:00:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local 
num_base_bdevs=2 00:11:11.231 05:00:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:11:11.232 05:00:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:11:11.232 05:00:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:11:11.232 05:00:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:11:11.232 05:00:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:11.232 05:00:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:11:11.232 05:00:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:11.232 05:00:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:11.232 05:00:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:11:11.232 05:00:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:11.232 05:00:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:11.232 05:00:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:11:11.232 05:00:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:11:11.232 05:00:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:11:11.232 05:00:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:11:11.232 05:00:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:11:11.232 05:00:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:11:11.232 05:00:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:11:11.232 05:00:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:11:11.232 
05:00:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:11:11.232 05:00:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:11:11.232 05:00:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=87113 00:11:11.232 05:00:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:11:11.232 05:00:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 87113 00:11:11.232 05:00:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@831 -- # '[' -z 87113 ']' 00:11:11.232 05:00:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:11.232 05:00:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:11.232 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:11.232 05:00:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:11.232 05:00:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:11.232 05:00:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:11.492 I/O size of 3145728 is greater than zero copy threshold (65536). 00:11:11.492 Zero copy mechanism will not be used. 00:11:11.492 [2024-12-14 05:00:22.157411] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
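The bdevperf invocation above uses a 3 MiB request size (`-o 3M`, i.e. 3145728 bytes), which is why the log reports that the 65536-byte zero-copy threshold is exceeded and zero copy is disabled. It also means the throughput lines later in this log (e.g. 179.00 IOPS, 537.00 MiB/s) should relate IOPS and MiB/s by exactly a factor of 3. A small sketch checking that arithmetic against the figures printed in the log (the consistency check is ours, not part of the test suite):

```python
# Request size from the bdevperf command line in the log: -o 3M.
IO_SIZE_BYTES = 3 * 1024 * 1024          # 3145728 bytes
ZERO_COPY_THRESHOLD = 65536              # threshold reported by bdevperf

# bdevperf disables zero copy because the request exceeds the threshold.
assert IO_SIZE_BYTES == 3145728
assert IO_SIZE_BYTES > ZERO_COPY_THRESHOLD

# (IOPS, MiB/s) samples printed in this log's throughput lines.
samples = [(179.00, 537.00), (164.50, 493.50)]
IO_SIZE_MIB = IO_SIZE_BYTES / (1024 * 1024)

# At a fixed 3 MiB request size, MiB/s must equal IOPS * 3.
for iops, mibps in samples:
    assert abs(iops * IO_SIZE_MIB - mibps) < 1e-6

print("throughput figures match the 3 MiB request size")
```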
00:11:11.492 [2024-12-14 05:00:22.157524] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87113 ] 00:11:11.492 [2024-12-14 05:00:22.318111] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:11.492 [2024-12-14 05:00:22.363871] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:11.751 [2024-12-14 05:00:22.405757] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:11.751 [2024-12-14 05:00:22.405796] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:12.322 05:00:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:12.322 05:00:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # return 0 00:11:12.322 05:00:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:12.322 05:00:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:12.322 05:00:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.322 05:00:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:12.322 BaseBdev1_malloc 00:11:12.322 05:00:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.322 05:00:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:11:12.322 05:00:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.322 05:00:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:12.322 [2024-12-14 05:00:22.999689] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev1_malloc 00:11:12.322 [2024-12-14 05:00:22.999814] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:12.322 [2024-12-14 05:00:22.999862] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:12.322 [2024-12-14 05:00:22.999906] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:12.322 [2024-12-14 05:00:23.002167] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:12.322 [2024-12-14 05:00:23.002259] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:12.322 BaseBdev1 00:11:12.322 05:00:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.322 05:00:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:12.322 05:00:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:12.322 05:00:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.322 05:00:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:12.322 BaseBdev2_malloc 00:11:12.322 05:00:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.322 05:00:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:11:12.322 05:00:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.322 05:00:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:12.322 [2024-12-14 05:00:23.040889] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:11:12.322 [2024-12-14 05:00:23.040988] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:12.322 [2024-12-14 05:00:23.041032] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:12.322 [2024-12-14 05:00:23.041052] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:12.322 [2024-12-14 05:00:23.045512] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:12.322 [2024-12-14 05:00:23.045582] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:12.322 BaseBdev2 00:11:12.322 05:00:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.322 05:00:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:11:12.322 05:00:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.322 05:00:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:12.322 spare_malloc 00:11:12.322 05:00:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.322 05:00:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:11:12.322 05:00:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.322 05:00:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:12.322 spare_delay 00:11:12.322 05:00:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.322 05:00:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:11:12.322 05:00:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.322 05:00:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:12.322 [2024-12-14 05:00:23.083396] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 
00:11:12.322 [2024-12-14 05:00:23.083506] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:12.322 [2024-12-14 05:00:23.083545] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:11:12.322 [2024-12-14 05:00:23.083582] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:12.323 [2024-12-14 05:00:23.085638] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:12.323 [2024-12-14 05:00:23.085722] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:11:12.323 spare 00:11:12.323 05:00:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.323 05:00:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:11:12.323 05:00:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.323 05:00:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:12.323 [2024-12-14 05:00:23.095390] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:12.323 [2024-12-14 05:00:23.097159] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:12.323 [2024-12-14 05:00:23.097248] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:11:12.323 [2024-12-14 05:00:23.097261] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:11:12.323 [2024-12-14 05:00:23.097511] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:11:12.323 [2024-12-14 05:00:23.097615] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:11:12.323 [2024-12-14 05:00:23.097632] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 
0x617000006280 00:11:12.323 [2024-12-14 05:00:23.097741] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:12.323 05:00:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.323 05:00:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:12.323 05:00:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:12.323 05:00:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:12.323 05:00:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:12.323 05:00:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:12.323 05:00:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:12.323 05:00:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:12.323 05:00:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:12.323 05:00:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:12.323 05:00:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:12.323 05:00:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:12.323 05:00:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:12.323 05:00:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.323 05:00:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:12.323 05:00:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.323 05:00:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:12.323 
"name": "raid_bdev1", 00:11:12.323 "uuid": "df207745-b2b5-4bc1-bd72-8707e3e5cb79", 00:11:12.323 "strip_size_kb": 0, 00:11:12.323 "state": "online", 00:11:12.323 "raid_level": "raid1", 00:11:12.323 "superblock": false, 00:11:12.323 "num_base_bdevs": 2, 00:11:12.323 "num_base_bdevs_discovered": 2, 00:11:12.323 "num_base_bdevs_operational": 2, 00:11:12.323 "base_bdevs_list": [ 00:11:12.323 { 00:11:12.323 "name": "BaseBdev1", 00:11:12.323 "uuid": "ba78bf0f-4137-5a03-bef6-e5d3ff79a1c4", 00:11:12.323 "is_configured": true, 00:11:12.323 "data_offset": 0, 00:11:12.323 "data_size": 65536 00:11:12.323 }, 00:11:12.323 { 00:11:12.323 "name": "BaseBdev2", 00:11:12.323 "uuid": "c6ab9ce7-814f-51db-89b9-c3eb6a59397c", 00:11:12.323 "is_configured": true, 00:11:12.323 "data_offset": 0, 00:11:12.323 "data_size": 65536 00:11:12.323 } 00:11:12.323 ] 00:11:12.323 }' 00:11:12.323 05:00:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:12.323 05:00:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:12.893 05:00:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:11:12.893 05:00:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:12.893 05:00:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.893 05:00:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:12.893 [2024-12-14 05:00:23.546982] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:12.893 05:00:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.893 05:00:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:11:12.893 05:00:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:12.893 05:00:23 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.893 05:00:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:12.893 05:00:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:11:12.893 05:00:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.893 05:00:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:11:12.893 05:00:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:11:12.893 05:00:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:12.893 05:00:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:11:12.893 05:00:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.893 05:00:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:12.893 [2024-12-14 05:00:23.626547] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:12.893 05:00:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.893 05:00:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:12.893 05:00:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:12.893 05:00:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:12.893 05:00:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:12.893 05:00:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:12.893 05:00:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:12.893 05:00:23 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:12.893 05:00:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:12.893 05:00:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:12.893 05:00:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:12.893 05:00:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:12.893 05:00:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:12.893 05:00:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.893 05:00:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:12.893 05:00:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.893 05:00:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:12.893 "name": "raid_bdev1", 00:11:12.893 "uuid": "df207745-b2b5-4bc1-bd72-8707e3e5cb79", 00:11:12.893 "strip_size_kb": 0, 00:11:12.893 "state": "online", 00:11:12.893 "raid_level": "raid1", 00:11:12.893 "superblock": false, 00:11:12.893 "num_base_bdevs": 2, 00:11:12.893 "num_base_bdevs_discovered": 1, 00:11:12.893 "num_base_bdevs_operational": 1, 00:11:12.893 "base_bdevs_list": [ 00:11:12.893 { 00:11:12.893 "name": null, 00:11:12.893 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:12.893 "is_configured": false, 00:11:12.893 "data_offset": 0, 00:11:12.893 "data_size": 65536 00:11:12.893 }, 00:11:12.893 { 00:11:12.893 "name": "BaseBdev2", 00:11:12.893 "uuid": "c6ab9ce7-814f-51db-89b9-c3eb6a59397c", 00:11:12.893 "is_configured": true, 00:11:12.893 "data_offset": 0, 00:11:12.893 "data_size": 65536 00:11:12.893 } 00:11:12.893 ] 00:11:12.893 }' 00:11:12.893 05:00:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:11:12.893 05:00:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:12.893 [2024-12-14 05:00:23.696490] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:12.893 I/O size of 3145728 is greater than zero copy threshold (65536). 00:11:12.893 Zero copy mechanism will not be used. 00:11:12.893 Running I/O for 60 seconds... 00:11:13.463 05:00:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:11:13.463 05:00:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.463 05:00:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:13.463 [2024-12-14 05:00:24.080888] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:13.463 05:00:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.463 05:00:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:11:13.463 [2024-12-14 05:00:24.127407] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:11:13.463 [2024-12-14 05:00:24.129378] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:13.463 [2024-12-14 05:00:24.236778] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:11:13.463 [2024-12-14 05:00:24.237295] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:11:13.723 [2024-12-14 05:00:24.451397] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:11:13.723 [2024-12-14 05:00:24.451746] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:11:13.983 [2024-12-14 05:00:24.690698] 
bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:11:13.983 179.00 IOPS, 537.00 MiB/s [2024-12-14T05:00:24.866Z] [2024-12-14 05:00:24.810517] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:11:13.983 [2024-12-14 05:00:24.810852] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:11:14.243 05:00:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:14.243 05:00:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:14.243 05:00:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:14.243 05:00:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:14.243 05:00:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:14.502 05:00:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:14.502 05:00:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:14.502 05:00:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.502 05:00:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:14.502 05:00:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.502 [2024-12-14 05:00:25.167569] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:11:14.502 05:00:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:14.502 "name": "raid_bdev1", 00:11:14.502 "uuid": "df207745-b2b5-4bc1-bd72-8707e3e5cb79", 00:11:14.502 
"strip_size_kb": 0, 00:11:14.502 "state": "online", 00:11:14.502 "raid_level": "raid1", 00:11:14.502 "superblock": false, 00:11:14.502 "num_base_bdevs": 2, 00:11:14.502 "num_base_bdevs_discovered": 2, 00:11:14.502 "num_base_bdevs_operational": 2, 00:11:14.502 "process": { 00:11:14.502 "type": "rebuild", 00:11:14.502 "target": "spare", 00:11:14.502 "progress": { 00:11:14.502 "blocks": 14336, 00:11:14.502 "percent": 21 00:11:14.502 } 00:11:14.502 }, 00:11:14.502 "base_bdevs_list": [ 00:11:14.502 { 00:11:14.502 "name": "spare", 00:11:14.502 "uuid": "e13b9701-324e-5b54-86a4-429e46a5a1ce", 00:11:14.502 "is_configured": true, 00:11:14.502 "data_offset": 0, 00:11:14.502 "data_size": 65536 00:11:14.502 }, 00:11:14.502 { 00:11:14.502 "name": "BaseBdev2", 00:11:14.502 "uuid": "c6ab9ce7-814f-51db-89b9-c3eb6a59397c", 00:11:14.502 "is_configured": true, 00:11:14.502 "data_offset": 0, 00:11:14.502 "data_size": 65536 00:11:14.502 } 00:11:14.502 ] 00:11:14.502 }' 00:11:14.502 05:00:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:14.502 05:00:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:14.502 05:00:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:14.502 05:00:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:14.502 05:00:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:11:14.502 05:00:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.502 05:00:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:14.502 [2024-12-14 05:00:25.264441] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:14.502 [2024-12-14 05:00:25.274380] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 
offset_end: 18432 00:11:14.502 [2024-12-14 05:00:25.274679] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:11:14.761 [2024-12-14 05:00:25.386729] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:11:14.761 [2024-12-14 05:00:25.399284] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:14.761 [2024-12-14 05:00:25.399359] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:14.761 [2024-12-14 05:00:25.399386] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:11:14.761 [2024-12-14 05:00:25.405677] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005ee0 00:11:14.761 05:00:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.761 05:00:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:14.761 05:00:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:14.761 05:00:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:14.761 05:00:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:14.761 05:00:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:14.761 05:00:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:14.761 05:00:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:14.761 05:00:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:14.761 05:00:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:14.761 05:00:25 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:14.761 05:00:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:14.761 05:00:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.761 05:00:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:14.761 05:00:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:14.761 05:00:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.761 05:00:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:14.761 "name": "raid_bdev1", 00:11:14.761 "uuid": "df207745-b2b5-4bc1-bd72-8707e3e5cb79", 00:11:14.761 "strip_size_kb": 0, 00:11:14.761 "state": "online", 00:11:14.761 "raid_level": "raid1", 00:11:14.761 "superblock": false, 00:11:14.761 "num_base_bdevs": 2, 00:11:14.761 "num_base_bdevs_discovered": 1, 00:11:14.761 "num_base_bdevs_operational": 1, 00:11:14.761 "base_bdevs_list": [ 00:11:14.761 { 00:11:14.761 "name": null, 00:11:14.761 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:14.761 "is_configured": false, 00:11:14.761 "data_offset": 0, 00:11:14.761 "data_size": 65536 00:11:14.761 }, 00:11:14.761 { 00:11:14.761 "name": "BaseBdev2", 00:11:14.761 "uuid": "c6ab9ce7-814f-51db-89b9-c3eb6a59397c", 00:11:14.761 "is_configured": true, 00:11:14.761 "data_offset": 0, 00:11:14.761 "data_size": 65536 00:11:14.761 } 00:11:14.761 ] 00:11:14.761 }' 00:11:14.761 05:00:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:14.761 05:00:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:15.021 164.50 IOPS, 493.50 MiB/s [2024-12-14T05:00:25.904Z] 05:00:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:15.021 05:00:25 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:15.022 05:00:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:15.022 05:00:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:15.022 05:00:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:15.022 05:00:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:15.022 05:00:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:15.022 05:00:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.022 05:00:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:15.022 05:00:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.022 05:00:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:15.022 "name": "raid_bdev1", 00:11:15.022 "uuid": "df207745-b2b5-4bc1-bd72-8707e3e5cb79", 00:11:15.022 "strip_size_kb": 0, 00:11:15.022 "state": "online", 00:11:15.022 "raid_level": "raid1", 00:11:15.022 "superblock": false, 00:11:15.022 "num_base_bdevs": 2, 00:11:15.022 "num_base_bdevs_discovered": 1, 00:11:15.022 "num_base_bdevs_operational": 1, 00:11:15.022 "base_bdevs_list": [ 00:11:15.022 { 00:11:15.022 "name": null, 00:11:15.022 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:15.022 "is_configured": false, 00:11:15.022 "data_offset": 0, 00:11:15.022 "data_size": 65536 00:11:15.022 }, 00:11:15.022 { 00:11:15.022 "name": "BaseBdev2", 00:11:15.022 "uuid": "c6ab9ce7-814f-51db-89b9-c3eb6a59397c", 00:11:15.022 "is_configured": true, 00:11:15.022 "data_offset": 0, 00:11:15.022 "data_size": 65536 00:11:15.022 } 00:11:15.022 ] 00:11:15.022 }' 00:11:15.022 05:00:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- 
# jq -r '.process.type // "none"' 00:11:15.022 05:00:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:15.022 05:00:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:15.022 05:00:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:15.022 05:00:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:11:15.022 05:00:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.022 05:00:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:15.022 [2024-12-14 05:00:25.902326] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:15.281 05:00:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.281 05:00:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:11:15.281 [2024-12-14 05:00:25.954478] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:11:15.281 [2024-12-14 05:00:25.956438] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:15.281 [2024-12-14 05:00:26.068980] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:11:15.281 [2024-12-14 05:00:26.069455] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:11:15.541 [2024-12-14 05:00:26.282364] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:11:15.541 [2024-12-14 05:00:26.282624] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:11:16.111 162.33 IOPS, 487.00 MiB/s [2024-12-14T05:00:26.994Z] [2024-12-14 
05:00:26.745994] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:11:16.111 05:00:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:16.111 05:00:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:16.111 05:00:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:16.111 05:00:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:16.111 05:00:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:16.111 05:00:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:16.111 05:00:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:16.111 05:00:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.111 05:00:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:16.111 05:00:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.111 05:00:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:16.111 "name": "raid_bdev1", 00:11:16.111 "uuid": "df207745-b2b5-4bc1-bd72-8707e3e5cb79", 00:11:16.111 "strip_size_kb": 0, 00:11:16.111 "state": "online", 00:11:16.111 "raid_level": "raid1", 00:11:16.111 "superblock": false, 00:11:16.111 "num_base_bdevs": 2, 00:11:16.111 "num_base_bdevs_discovered": 2, 00:11:16.111 "num_base_bdevs_operational": 2, 00:11:16.111 "process": { 00:11:16.111 "type": "rebuild", 00:11:16.111 "target": "spare", 00:11:16.111 "progress": { 00:11:16.111 "blocks": 10240, 00:11:16.111 "percent": 15 00:11:16.111 } 00:11:16.111 }, 00:11:16.111 "base_bdevs_list": [ 00:11:16.111 { 00:11:16.111 "name": "spare", 
00:11:16.111 "uuid": "e13b9701-324e-5b54-86a4-429e46a5a1ce", 00:11:16.111 "is_configured": true, 00:11:16.111 "data_offset": 0, 00:11:16.111 "data_size": 65536 00:11:16.111 }, 00:11:16.111 { 00:11:16.111 "name": "BaseBdev2", 00:11:16.111 "uuid": "c6ab9ce7-814f-51db-89b9-c3eb6a59397c", 00:11:16.111 "is_configured": true, 00:11:16.111 "data_offset": 0, 00:11:16.111 "data_size": 65536 00:11:16.111 } 00:11:16.111 ] 00:11:16.111 }' 00:11:16.371 05:00:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:16.371 05:00:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:16.371 05:00:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:16.371 05:00:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:16.371 05:00:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:11:16.371 05:00:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:11:16.371 05:00:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:11:16.372 05:00:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:11:16.372 05:00:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=318 00:11:16.372 05:00:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:16.372 05:00:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:16.372 05:00:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:16.372 05:00:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:16.372 05:00:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:16.372 
05:00:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:16.372 05:00:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:16.372 05:00:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:16.372 05:00:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.372 05:00:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:16.372 [2024-12-14 05:00:27.080317] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:11:16.372 [2024-12-14 05:00:27.080812] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:11:16.372 05:00:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.372 05:00:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:16.372 "name": "raid_bdev1", 00:11:16.372 "uuid": "df207745-b2b5-4bc1-bd72-8707e3e5cb79", 00:11:16.372 "strip_size_kb": 0, 00:11:16.372 "state": "online", 00:11:16.372 "raid_level": "raid1", 00:11:16.372 "superblock": false, 00:11:16.372 "num_base_bdevs": 2, 00:11:16.372 "num_base_bdevs_discovered": 2, 00:11:16.372 "num_base_bdevs_operational": 2, 00:11:16.372 "process": { 00:11:16.372 "type": "rebuild", 00:11:16.372 "target": "spare", 00:11:16.372 "progress": { 00:11:16.372 "blocks": 12288, 00:11:16.372 "percent": 18 00:11:16.372 } 00:11:16.372 }, 00:11:16.372 "base_bdevs_list": [ 00:11:16.372 { 00:11:16.372 "name": "spare", 00:11:16.372 "uuid": "e13b9701-324e-5b54-86a4-429e46a5a1ce", 00:11:16.372 "is_configured": true, 00:11:16.372 "data_offset": 0, 00:11:16.372 "data_size": 65536 00:11:16.372 }, 00:11:16.372 { 00:11:16.372 "name": "BaseBdev2", 00:11:16.372 "uuid": "c6ab9ce7-814f-51db-89b9-c3eb6a59397c", 
00:11:16.372 "is_configured": true, 00:11:16.372 "data_offset": 0, 00:11:16.372 "data_size": 65536 00:11:16.372 } 00:11:16.372 ] 00:11:16.372 }' 00:11:16.372 05:00:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:16.372 05:00:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:16.372 05:00:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:16.372 05:00:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:16.372 05:00:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:16.632 [2024-12-14 05:00:27.508902] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:11:16.892 [2024-12-14 05:00:27.616209] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:11:16.892 [2024-12-14 05:00:27.616518] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:11:17.151 143.25 IOPS, 429.75 MiB/s [2024-12-14T05:00:28.034Z] [2024-12-14 05:00:27.956139] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:11:17.412 [2024-12-14 05:00:28.059558] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:11:17.412 [2024-12-14 05:00:28.059909] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:11:17.412 05:00:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:17.412 05:00:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:17.412 05:00:28 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:17.412 05:00:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:17.412 05:00:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:17.412 05:00:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:17.412 05:00:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:17.412 05:00:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:17.412 05:00:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.412 05:00:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:17.412 05:00:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.412 05:00:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:17.412 "name": "raid_bdev1", 00:11:17.412 "uuid": "df207745-b2b5-4bc1-bd72-8707e3e5cb79", 00:11:17.412 "strip_size_kb": 0, 00:11:17.412 "state": "online", 00:11:17.412 "raid_level": "raid1", 00:11:17.412 "superblock": false, 00:11:17.412 "num_base_bdevs": 2, 00:11:17.412 "num_base_bdevs_discovered": 2, 00:11:17.412 "num_base_bdevs_operational": 2, 00:11:17.412 "process": { 00:11:17.412 "type": "rebuild", 00:11:17.412 "target": "spare", 00:11:17.412 "progress": { 00:11:17.412 "blocks": 28672, 00:11:17.412 "percent": 43 00:11:17.412 } 00:11:17.412 }, 00:11:17.412 "base_bdevs_list": [ 00:11:17.412 { 00:11:17.412 "name": "spare", 00:11:17.412 "uuid": "e13b9701-324e-5b54-86a4-429e46a5a1ce", 00:11:17.412 "is_configured": true, 00:11:17.412 "data_offset": 0, 00:11:17.412 "data_size": 65536 00:11:17.412 }, 00:11:17.412 { 00:11:17.412 "name": "BaseBdev2", 00:11:17.412 "uuid": "c6ab9ce7-814f-51db-89b9-c3eb6a59397c", 00:11:17.412 
"is_configured": true, 00:11:17.412 "data_offset": 0, 00:11:17.412 "data_size": 65536 00:11:17.412 } 00:11:17.412 ] 00:11:17.412 }' 00:11:17.412 05:00:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:17.672 05:00:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:17.672 05:00:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:17.672 05:00:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:17.672 05:00:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:17.672 [2024-12-14 05:00:28.380997] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:11:17.672 [2024-12-14 05:00:28.381435] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:11:17.672 [2024-12-14 05:00:28.500508] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:11:18.192 123.20 IOPS, 369.60 MiB/s [2024-12-14T05:00:29.075Z] [2024-12-14 05:00:28.837813] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:11:18.192 [2024-12-14 05:00:29.056486] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:11:18.762 05:00:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:18.762 05:00:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:18.762 05:00:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:18.762 05:00:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local 
process_type=rebuild 00:11:18.762 05:00:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:18.762 05:00:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:18.762 05:00:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:18.762 05:00:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.762 05:00:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:18.762 05:00:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:18.762 05:00:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.762 05:00:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:18.762 "name": "raid_bdev1", 00:11:18.762 "uuid": "df207745-b2b5-4bc1-bd72-8707e3e5cb79", 00:11:18.762 "strip_size_kb": 0, 00:11:18.762 "state": "online", 00:11:18.762 "raid_level": "raid1", 00:11:18.762 "superblock": false, 00:11:18.762 "num_base_bdevs": 2, 00:11:18.762 "num_base_bdevs_discovered": 2, 00:11:18.762 "num_base_bdevs_operational": 2, 00:11:18.762 "process": { 00:11:18.762 "type": "rebuild", 00:11:18.762 "target": "spare", 00:11:18.762 "progress": { 00:11:18.762 "blocks": 43008, 00:11:18.762 "percent": 65 00:11:18.762 } 00:11:18.762 }, 00:11:18.762 "base_bdevs_list": [ 00:11:18.762 { 00:11:18.762 "name": "spare", 00:11:18.762 "uuid": "e13b9701-324e-5b54-86a4-429e46a5a1ce", 00:11:18.762 "is_configured": true, 00:11:18.762 "data_offset": 0, 00:11:18.762 "data_size": 65536 00:11:18.762 }, 00:11:18.762 { 00:11:18.762 "name": "BaseBdev2", 00:11:18.762 "uuid": "c6ab9ce7-814f-51db-89b9-c3eb6a59397c", 00:11:18.762 "is_configured": true, 00:11:18.762 "data_offset": 0, 00:11:18.762 "data_size": 65536 00:11:18.762 } 00:11:18.762 ] 00:11:18.762 }' 00:11:18.762 05:00:29 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:18.762 05:00:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:18.762 05:00:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:18.762 05:00:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:18.762 05:00:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:19.281 107.83 IOPS, 323.50 MiB/s [2024-12-14T05:00:30.164Z] [2024-12-14 05:00:30.049717] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:11:19.852 05:00:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:19.852 05:00:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:19.852 05:00:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:19.852 05:00:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:19.852 05:00:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:19.852 05:00:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:19.852 05:00:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:19.852 05:00:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:19.852 05:00:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.852 05:00:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:19.852 05:00:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.852 05:00:30 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:19.852 "name": "raid_bdev1", 00:11:19.852 "uuid": "df207745-b2b5-4bc1-bd72-8707e3e5cb79", 00:11:19.852 "strip_size_kb": 0, 00:11:19.852 "state": "online", 00:11:19.852 "raid_level": "raid1", 00:11:19.852 "superblock": false, 00:11:19.852 "num_base_bdevs": 2, 00:11:19.852 "num_base_bdevs_discovered": 2, 00:11:19.852 "num_base_bdevs_operational": 2, 00:11:19.852 "process": { 00:11:19.852 "type": "rebuild", 00:11:19.852 "target": "spare", 00:11:19.852 "progress": { 00:11:19.852 "blocks": 63488, 00:11:19.852 "percent": 96 00:11:19.852 } 00:11:19.852 }, 00:11:19.852 "base_bdevs_list": [ 00:11:19.852 { 00:11:19.852 "name": "spare", 00:11:19.852 "uuid": "e13b9701-324e-5b54-86a4-429e46a5a1ce", 00:11:19.852 "is_configured": true, 00:11:19.852 "data_offset": 0, 00:11:19.852 "data_size": 65536 00:11:19.852 }, 00:11:19.852 { 00:11:19.852 "name": "BaseBdev2", 00:11:19.852 "uuid": "c6ab9ce7-814f-51db-89b9-c3eb6a59397c", 00:11:19.852 "is_configured": true, 00:11:19.852 "data_offset": 0, 00:11:19.852 "data_size": 65536 00:11:19.852 } 00:11:19.852 ] 00:11:19.852 }' 00:11:19.852 05:00:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:19.852 [2024-12-14 05:00:30.584783] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:11:19.852 05:00:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:19.852 05:00:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:19.852 05:00:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:19.852 05:00:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:19.852 [2024-12-14 05:00:30.681279] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:11:19.852 [2024-12-14 
05:00:30.688201] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:20.791 98.43 IOPS, 295.29 MiB/s [2024-12-14T05:00:31.674Z] 05:00:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:20.791 05:00:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:20.791 05:00:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:20.791 05:00:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:20.791 05:00:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:20.791 05:00:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:20.791 05:00:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:20.791 05:00:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:20.791 05:00:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.791 05:00:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:20.791 05:00:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.051 05:00:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:21.051 "name": "raid_bdev1", 00:11:21.051 "uuid": "df207745-b2b5-4bc1-bd72-8707e3e5cb79", 00:11:21.051 "strip_size_kb": 0, 00:11:21.051 "state": "online", 00:11:21.051 "raid_level": "raid1", 00:11:21.051 "superblock": false, 00:11:21.051 "num_base_bdevs": 2, 00:11:21.051 "num_base_bdevs_discovered": 2, 00:11:21.051 "num_base_bdevs_operational": 2, 00:11:21.051 "base_bdevs_list": [ 00:11:21.051 { 00:11:21.051 "name": "spare", 00:11:21.051 "uuid": "e13b9701-324e-5b54-86a4-429e46a5a1ce", 00:11:21.051 "is_configured": true, 
00:11:21.051 "data_offset": 0, 00:11:21.051 "data_size": 65536 00:11:21.051 }, 00:11:21.051 { 00:11:21.051 "name": "BaseBdev2", 00:11:21.051 "uuid": "c6ab9ce7-814f-51db-89b9-c3eb6a59397c", 00:11:21.051 "is_configured": true, 00:11:21.051 "data_offset": 0, 00:11:21.051 "data_size": 65536 00:11:21.051 } 00:11:21.051 ] 00:11:21.051 }' 00:11:21.051 05:00:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:21.051 91.62 IOPS, 274.88 MiB/s [2024-12-14T05:00:31.934Z] 05:00:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:11:21.051 05:00:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:21.051 05:00:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:11:21.051 05:00:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:11:21.051 05:00:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:21.051 05:00:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:21.051 05:00:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:21.051 05:00:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:21.051 05:00:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:21.051 05:00:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:21.051 05:00:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.051 05:00:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:21.051 05:00:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:21.051 05:00:31 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.051 05:00:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:21.051 "name": "raid_bdev1", 00:11:21.051 "uuid": "df207745-b2b5-4bc1-bd72-8707e3e5cb79", 00:11:21.051 "strip_size_kb": 0, 00:11:21.051 "state": "online", 00:11:21.051 "raid_level": "raid1", 00:11:21.051 "superblock": false, 00:11:21.051 "num_base_bdevs": 2, 00:11:21.051 "num_base_bdevs_discovered": 2, 00:11:21.052 "num_base_bdevs_operational": 2, 00:11:21.052 "base_bdevs_list": [ 00:11:21.052 { 00:11:21.052 "name": "spare", 00:11:21.052 "uuid": "e13b9701-324e-5b54-86a4-429e46a5a1ce", 00:11:21.052 "is_configured": true, 00:11:21.052 "data_offset": 0, 00:11:21.052 "data_size": 65536 00:11:21.052 }, 00:11:21.052 { 00:11:21.052 "name": "BaseBdev2", 00:11:21.052 "uuid": "c6ab9ce7-814f-51db-89b9-c3eb6a59397c", 00:11:21.052 "is_configured": true, 00:11:21.052 "data_offset": 0, 00:11:21.052 "data_size": 65536 00:11:21.052 } 00:11:21.052 ] 00:11:21.052 }' 00:11:21.052 05:00:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:21.052 05:00:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:21.052 05:00:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:21.052 05:00:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:21.052 05:00:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:21.052 05:00:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:21.052 05:00:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:21.052 05:00:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:21.052 05:00:31 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:21.052 05:00:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:21.052 05:00:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:21.052 05:00:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:21.052 05:00:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:21.052 05:00:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:21.052 05:00:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:21.052 05:00:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:21.052 05:00:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.052 05:00:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:21.052 05:00:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.312 05:00:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:21.312 "name": "raid_bdev1", 00:11:21.312 "uuid": "df207745-b2b5-4bc1-bd72-8707e3e5cb79", 00:11:21.312 "strip_size_kb": 0, 00:11:21.312 "state": "online", 00:11:21.312 "raid_level": "raid1", 00:11:21.312 "superblock": false, 00:11:21.312 "num_base_bdevs": 2, 00:11:21.312 "num_base_bdevs_discovered": 2, 00:11:21.312 "num_base_bdevs_operational": 2, 00:11:21.312 "base_bdevs_list": [ 00:11:21.312 { 00:11:21.312 "name": "spare", 00:11:21.312 "uuid": "e13b9701-324e-5b54-86a4-429e46a5a1ce", 00:11:21.312 "is_configured": true, 00:11:21.312 "data_offset": 0, 00:11:21.312 "data_size": 65536 00:11:21.312 }, 00:11:21.312 { 00:11:21.312 "name": "BaseBdev2", 00:11:21.312 "uuid": "c6ab9ce7-814f-51db-89b9-c3eb6a59397c", 00:11:21.312 "is_configured": true, 00:11:21.312 
"data_offset": 0, 00:11:21.312 "data_size": 65536 00:11:21.312 } 00:11:21.312 ] 00:11:21.312 }' 00:11:21.312 05:00:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:21.312 05:00:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:21.572 05:00:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:21.572 05:00:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.572 05:00:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:21.572 [2024-12-14 05:00:32.373865] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:21.572 [2024-12-14 05:00:32.373954] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:21.572 00:11:21.572 Latency(us) 00:11:21.572 [2024-12-14T05:00:32.455Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:21.572 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:11:21.572 raid_bdev1 : 8.72 87.92 263.76 0.00 0.00 15195.49 271.87 113557.58 00:11:21.572 [2024-12-14T05:00:32.455Z] =================================================================================================================== 00:11:21.572 [2024-12-14T05:00:32.455Z] Total : 87.92 263.76 0.00 0.00 15195.49 271.87 113557.58 00:11:21.572 [2024-12-14 05:00:32.408981] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:21.572 [2024-12-14 05:00:32.409086] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:21.572 [2024-12-14 05:00:32.409206] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:21.572 [2024-12-14 05:00:32.409265] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:11:21.572 
{ 00:11:21.572 "results": [ 00:11:21.572 { 00:11:21.572 "job": "raid_bdev1", 00:11:21.572 "core_mask": "0x1", 00:11:21.572 "workload": "randrw", 00:11:21.572 "percentage": 50, 00:11:21.572 "status": "finished", 00:11:21.572 "queue_depth": 2, 00:11:21.572 "io_size": 3145728, 00:11:21.572 "runtime": 8.723899, 00:11:21.572 "iops": 87.91940392707436, 00:11:21.572 "mibps": 263.75821178122305, 00:11:21.572 "io_failed": 0, 00:11:21.572 "io_timeout": 0, 00:11:21.572 "avg_latency_us": 15195.49001554289, 00:11:21.572 "min_latency_us": 271.87423580786026, 00:11:21.572 "max_latency_us": 113557.57554585153 00:11:21.572 } 00:11:21.572 ], 00:11:21.572 "core_count": 1 00:11:21.572 } 00:11:21.572 05:00:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.572 05:00:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:21.572 05:00:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.572 05:00:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:21.572 05:00:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:11:21.572 05:00:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.833 05:00:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:11:21.833 05:00:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:11:21.833 05:00:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:11:21.833 05:00:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:11:21.833 05:00:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:11:21.833 05:00:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:11:21.833 05:00:32 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:21.833 05:00:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:11:21.833 05:00:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:21.833 05:00:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:11:21.833 05:00:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:21.833 05:00:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:21.833 05:00:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:11:21.833 /dev/nbd0 00:11:21.833 05:00:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:21.833 05:00:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:21.833 05:00:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:11:21.833 05:00:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:11:21.833 05:00:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:21.833 05:00:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:21.833 05:00:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:11:21.833 05:00:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:11:21.833 05:00:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:21.833 05:00:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:21.833 05:00:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 
00:11:21.833 1+0 records in 00:11:21.833 1+0 records out 00:11:21.833 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000571303 s, 7.2 MB/s 00:11:21.833 05:00:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:21.833 05:00:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:11:21.833 05:00:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:21.833 05:00:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:22.093 05:00:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:11:22.093 05:00:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:22.093 05:00:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:22.093 05:00:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:11:22.093 05:00:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:11:22.093 05:00:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:11:22.093 05:00:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:11:22.093 05:00:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:11:22.093 05:00:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:22.093 05:00:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:11:22.093 05:00:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:22.093 05:00:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:11:22.093 05:00:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 
00:11:22.093 05:00:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:22.093 05:00:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:11:22.093 /dev/nbd1 00:11:22.093 05:00:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:11:22.093 05:00:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:11:22.093 05:00:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:11:22.093 05:00:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:11:22.093 05:00:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:22.093 05:00:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:22.093 05:00:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:11:22.093 05:00:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:11:22.093 05:00:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:22.093 05:00:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:22.093 05:00:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:22.093 1+0 records in 00:11:22.093 1+0 records out 00:11:22.093 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000448821 s, 9.1 MB/s 00:11:22.093 05:00:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:22.093 05:00:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:11:22.093 05:00:32 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:22.093 05:00:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:22.093 05:00:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:11:22.093 05:00:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:22.093 05:00:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:22.093 05:00:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:11:22.353 05:00:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:11:22.353 05:00:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:11:22.353 05:00:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:11:22.353 05:00:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:22.353 05:00:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:11:22.353 05:00:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:22.353 05:00:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:11:22.353 05:00:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:22.353 05:00:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:22.614 05:00:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:22.614 05:00:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:22.614 05:00:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:22.614 05:00:33 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:22.614 05:00:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:11:22.614 05:00:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:11:22.614 05:00:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:11:22.614 05:00:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:11:22.614 05:00:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:11:22.614 05:00:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:22.614 05:00:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:11:22.614 05:00:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:22.614 05:00:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:11:22.614 05:00:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:22.614 05:00:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:22.614 05:00:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:22.614 05:00:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:22.614 05:00:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:22.614 05:00:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:22.614 05:00:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:11:22.614 05:00:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:11:22.614 05:00:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:11:22.614 05:00:33 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 87113 00:11:22.614 05:00:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@950 -- # '[' -z 87113 ']' 00:11:22.614 05:00:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # kill -0 87113 00:11:22.614 05:00:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@955 -- # uname 00:11:22.614 05:00:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:22.614 05:00:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 87113 00:11:22.614 killing process with pid 87113 00:11:22.614 Received shutdown signal, test time was about 9.804586 seconds 00:11:22.614 00:11:22.614 Latency(us) 00:11:22.614 [2024-12-14T05:00:33.497Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:22.614 [2024-12-14T05:00:33.497Z] =================================================================================================================== 00:11:22.614 [2024-12-14T05:00:33.497Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:11:22.614 05:00:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:22.614 05:00:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:22.614 05:00:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@968 -- # echo 'killing process with pid 87113' 00:11:22.614 05:00:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@969 -- # kill 87113 00:11:22.614 [2024-12-14 05:00:33.484346] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:22.614 05:00:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@974 -- # wait 87113 00:11:22.874 [2024-12-14 05:00:33.510823] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:22.874 05:00:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 
00:11:22.874 00:11:22.874 real 0m11.681s 00:11:22.874 user 0m14.839s 00:11:22.874 sys 0m1.377s 00:11:22.874 05:00:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:22.874 05:00:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:22.874 ************************************ 00:11:22.874 END TEST raid_rebuild_test_io 00:11:22.874 ************************************ 00:11:23.135 05:00:33 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true true 00:11:23.135 05:00:33 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:11:23.135 05:00:33 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:23.135 05:00:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:23.135 ************************************ 00:11:23.135 START TEST raid_rebuild_test_sb_io 00:11:23.135 ************************************ 00:11:23.135 05:00:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true true true 00:11:23.135 05:00:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:11:23.135 05:00:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:11:23.135 05:00:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:11:23.135 05:00:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:11:23.135 05:00:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:11:23.135 05:00:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:11:23.135 05:00:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:23.135 05:00:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:11:23.135 05:00:33 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:23.135 05:00:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:23.135 05:00:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:11:23.135 05:00:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:23.135 05:00:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:23.135 05:00:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:11:23.135 05:00:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:11:23.135 05:00:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:11:23.135 05:00:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:11:23.135 05:00:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:11:23.135 05:00:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:11:23.135 05:00:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:11:23.135 05:00:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:11:23.135 05:00:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:11:23.135 05:00:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:11:23.135 05:00:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:11:23.135 05:00:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=87494 00:11:23.135 05:00:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:11:23.135 
05:00:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 87494 00:11:23.135 05:00:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@831 -- # '[' -z 87494 ']' 00:11:23.135 05:00:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:23.135 05:00:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:23.135 05:00:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:23.135 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:23.135 05:00:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:23.135 05:00:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:23.135 [2024-12-14 05:00:33.912398] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:11:23.135 [2024-12-14 05:00:33.912628] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87494 ] 00:11:23.135 I/O size of 3145728 is greater than zero copy threshold (65536). 00:11:23.135 Zero copy mechanism will not be used. 
00:11:23.396 [2024-12-14 05:00:34.071739] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:23.396 [2024-12-14 05:00:34.117246] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:23.396 [2024-12-14 05:00:34.159598] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:23.396 [2024-12-14 05:00:34.159628] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:23.966 05:00:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:23.966 05:00:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # return 0 00:11:23.966 05:00:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:23.966 05:00:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:23.966 05:00:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.966 05:00:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:23.966 BaseBdev1_malloc 00:11:23.966 05:00:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.966 05:00:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:11:23.966 05:00:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.966 05:00:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:23.966 [2024-12-14 05:00:34.757777] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:11:23.966 [2024-12-14 05:00:34.757886] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:23.966 [2024-12-14 05:00:34.757959] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 
00:11:23.966 [2024-12-14 05:00:34.758013] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:23.966 [2024-12-14 05:00:34.760142] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:23.966 [2024-12-14 05:00:34.760241] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:23.966 BaseBdev1 00:11:23.966 05:00:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.966 05:00:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:23.966 05:00:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:23.966 05:00:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.966 05:00:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:23.966 BaseBdev2_malloc 00:11:23.966 05:00:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.966 05:00:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:11:23.966 05:00:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.966 05:00:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:23.966 [2024-12-14 05:00:34.794073] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:11:23.966 [2024-12-14 05:00:34.794126] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:23.966 [2024-12-14 05:00:34.794145] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:23.966 [2024-12-14 05:00:34.794153] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:23.966 [2024-12-14 05:00:34.796208] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:23.966 [2024-12-14 05:00:34.796243] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:23.966 BaseBdev2 00:11:23.966 05:00:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.966 05:00:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:11:23.966 05:00:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.966 05:00:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:23.966 spare_malloc 00:11:23.966 05:00:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.966 05:00:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:11:23.966 05:00:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.966 05:00:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:23.966 spare_delay 00:11:23.966 05:00:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.966 05:00:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:11:23.966 05:00:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.966 05:00:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:23.966 [2024-12-14 05:00:34.834673] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:11:23.966 [2024-12-14 05:00:34.834781] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:23.966 [2024-12-14 05:00:34.834831] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:11:23.966 [2024-12-14 05:00:34.834887] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:23.966 [2024-12-14 05:00:34.836998] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:23.967 [2024-12-14 05:00:34.837073] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:11:23.967 spare 00:11:23.967 05:00:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.967 05:00:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:11:23.967 05:00:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.967 05:00:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:23.967 [2024-12-14 05:00:34.846677] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:24.226 [2024-12-14 05:00:34.848588] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:24.226 [2024-12-14 05:00:34.848808] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:11:24.226 [2024-12-14 05:00:34.848876] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:24.226 [2024-12-14 05:00:34.849131] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:11:24.226 [2024-12-14 05:00:34.849284] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:11:24.226 [2024-12-14 05:00:34.849300] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:11:24.226 [2024-12-14 05:00:34.849437] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:24.226 05:00:34 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.226 05:00:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:24.226 05:00:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:24.226 05:00:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:24.226 05:00:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:24.226 05:00:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:24.226 05:00:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:24.227 05:00:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:24.227 05:00:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:24.227 05:00:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:24.227 05:00:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:24.227 05:00:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:24.227 05:00:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.227 05:00:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:24.227 05:00:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:24.227 05:00:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.227 05:00:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:24.227 "name": "raid_bdev1", 00:11:24.227 "uuid": "b564bcae-3352-4051-99ed-3b231bb05c95", 00:11:24.227 
"strip_size_kb": 0, 00:11:24.227 "state": "online", 00:11:24.227 "raid_level": "raid1", 00:11:24.227 "superblock": true, 00:11:24.227 "num_base_bdevs": 2, 00:11:24.227 "num_base_bdevs_discovered": 2, 00:11:24.227 "num_base_bdevs_operational": 2, 00:11:24.227 "base_bdevs_list": [ 00:11:24.227 { 00:11:24.227 "name": "BaseBdev1", 00:11:24.227 "uuid": "9632c312-c536-5e0b-8cd5-8166d6ee68ca", 00:11:24.227 "is_configured": true, 00:11:24.227 "data_offset": 2048, 00:11:24.227 "data_size": 63488 00:11:24.227 }, 00:11:24.227 { 00:11:24.227 "name": "BaseBdev2", 00:11:24.227 "uuid": "1a0160ff-f0ce-5ecb-9dac-bef93bd634bb", 00:11:24.227 "is_configured": true, 00:11:24.227 "data_offset": 2048, 00:11:24.227 "data_size": 63488 00:11:24.227 } 00:11:24.227 ] 00:11:24.227 }' 00:11:24.227 05:00:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:24.227 05:00:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:24.486 05:00:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:24.486 05:00:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:11:24.486 05:00:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.486 05:00:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:24.486 [2024-12-14 05:00:35.278194] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:24.486 05:00:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.486 05:00:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:11:24.486 05:00:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:24.486 05:00:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 
00:11:24.486 05:00:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.486 05:00:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:24.486 05:00:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.486 05:00:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:11:24.486 05:00:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:11:24.486 05:00:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:11:24.486 05:00:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:24.486 05:00:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.486 05:00:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:24.746 [2024-12-14 05:00:35.369769] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:24.746 05:00:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.746 05:00:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:24.746 05:00:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:24.746 05:00:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:24.746 05:00:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:24.746 05:00:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:24.746 05:00:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:24.746 05:00:35 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:24.746 05:00:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:24.746 05:00:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:24.746 05:00:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:24.746 05:00:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:24.746 05:00:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:24.746 05:00:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.746 05:00:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:24.746 05:00:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.746 05:00:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:24.746 "name": "raid_bdev1", 00:11:24.746 "uuid": "b564bcae-3352-4051-99ed-3b231bb05c95", 00:11:24.746 "strip_size_kb": 0, 00:11:24.746 "state": "online", 00:11:24.746 "raid_level": "raid1", 00:11:24.746 "superblock": true, 00:11:24.746 "num_base_bdevs": 2, 00:11:24.746 "num_base_bdevs_discovered": 1, 00:11:24.746 "num_base_bdevs_operational": 1, 00:11:24.746 "base_bdevs_list": [ 00:11:24.746 { 00:11:24.746 "name": null, 00:11:24.746 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:24.746 "is_configured": false, 00:11:24.746 "data_offset": 0, 00:11:24.746 "data_size": 63488 00:11:24.746 }, 00:11:24.746 { 00:11:24.746 "name": "BaseBdev2", 00:11:24.746 "uuid": "1a0160ff-f0ce-5ecb-9dac-bef93bd634bb", 00:11:24.746 "is_configured": true, 00:11:24.746 "data_offset": 2048, 00:11:24.746 "data_size": 63488 00:11:24.746 } 00:11:24.746 ] 00:11:24.746 }' 00:11:24.746 05:00:35 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:24.746 05:00:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:24.746 [2024-12-14 05:00:35.459577] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:24.746 I/O size of 3145728 is greater than zero copy threshold (65536). 00:11:24.746 Zero copy mechanism will not be used. 00:11:24.746 Running I/O for 60 seconds... 00:11:25.006 05:00:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:11:25.006 05:00:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.007 05:00:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:25.007 [2024-12-14 05:00:35.817798] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:25.007 05:00:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.007 05:00:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:11:25.007 [2024-12-14 05:00:35.848646] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:11:25.007 [2024-12-14 05:00:35.850585] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:25.267 [2024-12-14 05:00:35.968339] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:11:25.267 [2024-12-14 05:00:35.968800] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:11:25.267 [2024-12-14 05:00:36.094542] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:11:25.267 [2024-12-14 05:00:36.094846] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 
6144 00:11:25.527 [2024-12-14 05:00:36.316864] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:11:25.527 [2024-12-14 05:00:36.317315] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:11:25.787 210.00 IOPS, 630.00 MiB/s [2024-12-14T05:00:36.670Z] [2024-12-14 05:00:36.544503] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:11:26.046 [2024-12-14 05:00:36.776246] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:11:26.047 [2024-12-14 05:00:36.776578] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:11:26.047 05:00:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:26.047 05:00:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:26.047 05:00:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:26.047 05:00:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:26.047 05:00:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:26.047 05:00:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:26.047 05:00:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.047 05:00:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:26.047 05:00:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:26.047 05:00:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:11:26.047 [2024-12-14 05:00:36.895349] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:11:26.047 05:00:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:26.047 "name": "raid_bdev1", 00:11:26.047 "uuid": "b564bcae-3352-4051-99ed-3b231bb05c95", 00:11:26.047 "strip_size_kb": 0, 00:11:26.047 "state": "online", 00:11:26.047 "raid_level": "raid1", 00:11:26.047 "superblock": true, 00:11:26.047 "num_base_bdevs": 2, 00:11:26.047 "num_base_bdevs_discovered": 2, 00:11:26.047 "num_base_bdevs_operational": 2, 00:11:26.047 "process": { 00:11:26.047 "type": "rebuild", 00:11:26.047 "target": "spare", 00:11:26.047 "progress": { 00:11:26.047 "blocks": 14336, 00:11:26.047 "percent": 22 00:11:26.047 } 00:11:26.047 }, 00:11:26.047 "base_bdevs_list": [ 00:11:26.047 { 00:11:26.047 "name": "spare", 00:11:26.047 "uuid": "bd88210e-4d95-5137-a211-bf02dd9549f4", 00:11:26.047 "is_configured": true, 00:11:26.047 "data_offset": 2048, 00:11:26.047 "data_size": 63488 00:11:26.047 }, 00:11:26.047 { 00:11:26.047 "name": "BaseBdev2", 00:11:26.047 "uuid": "1a0160ff-f0ce-5ecb-9dac-bef93bd634bb", 00:11:26.047 "is_configured": true, 00:11:26.047 "data_offset": 2048, 00:11:26.047 "data_size": 63488 00:11:26.047 } 00:11:26.047 ] 00:11:26.047 }' 00:11:26.047 05:00:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:26.307 05:00:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:26.307 05:00:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:26.307 05:00:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:26.307 05:00:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:11:26.307 05:00:36 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.307 05:00:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:26.307 [2024-12-14 05:00:36.979893] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:26.307 [2024-12-14 05:00:37.002361] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:11:26.307 [2024-12-14 05:00:37.002606] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:11:26.307 [2024-12-14 05:00:37.103812] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:11:26.307 [2024-12-14 05:00:37.105931] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:26.307 [2024-12-14 05:00:37.106009] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:26.307 [2024-12-14 05:00:37.106024] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:11:26.307 [2024-12-14 05:00:37.117571] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005ee0 00:11:26.307 05:00:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.307 05:00:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:26.307 05:00:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:26.307 05:00:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:26.307 05:00:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:26.307 05:00:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:26.307 05:00:37 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:26.307 05:00:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:26.307 05:00:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:26.307 05:00:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:26.307 05:00:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:26.307 05:00:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:26.307 05:00:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:26.307 05:00:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.307 05:00:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:26.307 05:00:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.307 05:00:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:26.307 "name": "raid_bdev1", 00:11:26.307 "uuid": "b564bcae-3352-4051-99ed-3b231bb05c95", 00:11:26.307 "strip_size_kb": 0, 00:11:26.307 "state": "online", 00:11:26.307 "raid_level": "raid1", 00:11:26.307 "superblock": true, 00:11:26.307 "num_base_bdevs": 2, 00:11:26.307 "num_base_bdevs_discovered": 1, 00:11:26.307 "num_base_bdevs_operational": 1, 00:11:26.307 "base_bdevs_list": [ 00:11:26.307 { 00:11:26.307 "name": null, 00:11:26.307 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:26.307 "is_configured": false, 00:11:26.307 "data_offset": 0, 00:11:26.307 "data_size": 63488 00:11:26.307 }, 00:11:26.307 { 00:11:26.307 "name": "BaseBdev2", 00:11:26.307 "uuid": "1a0160ff-f0ce-5ecb-9dac-bef93bd634bb", 00:11:26.307 "is_configured": true, 00:11:26.307 "data_offset": 2048, 00:11:26.307 
"data_size": 63488 00:11:26.307 } 00:11:26.307 ] 00:11:26.307 }' 00:11:26.307 05:00:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:26.307 05:00:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:26.877 212.00 IOPS, 636.00 MiB/s [2024-12-14T05:00:37.761Z] 05:00:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:26.878 05:00:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:26.878 05:00:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:26.878 05:00:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:26.878 05:00:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:26.878 05:00:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:26.878 05:00:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.878 05:00:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:26.878 05:00:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:26.878 05:00:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.878 05:00:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:26.878 "name": "raid_bdev1", 00:11:26.878 "uuid": "b564bcae-3352-4051-99ed-3b231bb05c95", 00:11:26.878 "strip_size_kb": 0, 00:11:26.878 "state": "online", 00:11:26.878 "raid_level": "raid1", 00:11:26.878 "superblock": true, 00:11:26.878 "num_base_bdevs": 2, 00:11:26.878 "num_base_bdevs_discovered": 1, 00:11:26.878 "num_base_bdevs_operational": 1, 00:11:26.878 "base_bdevs_list": [ 00:11:26.878 { 00:11:26.878 "name": null, 
00:11:26.878 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:26.878 "is_configured": false, 00:11:26.878 "data_offset": 0, 00:11:26.878 "data_size": 63488 00:11:26.878 }, 00:11:26.878 { 00:11:26.878 "name": "BaseBdev2", 00:11:26.878 "uuid": "1a0160ff-f0ce-5ecb-9dac-bef93bd634bb", 00:11:26.878 "is_configured": true, 00:11:26.878 "data_offset": 2048, 00:11:26.878 "data_size": 63488 00:11:26.878 } 00:11:26.878 ] 00:11:26.878 }' 00:11:26.878 05:00:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:26.878 05:00:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:26.878 05:00:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:26.878 05:00:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:26.878 05:00:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:11:26.878 05:00:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.878 05:00:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:26.878 [2024-12-14 05:00:37.714577] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:26.878 05:00:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.878 05:00:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:11:26.878 [2024-12-14 05:00:37.750267] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:11:26.878 [2024-12-14 05:00:37.752262] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:27.138 [2024-12-14 05:00:37.870382] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:11:27.138 
[2024-12-14 05:00:37.870893] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:11:27.397 [2024-12-14 05:00:38.071961] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:11:27.397 [2024-12-14 05:00:38.072230] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:11:27.657 [2024-12-14 05:00:38.289311] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:11:27.657 197.33 IOPS, 592.00 MiB/s [2024-12-14T05:00:38.540Z] [2024-12-14 05:00:38.509165] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:11:27.657 [2024-12-14 05:00:38.509406] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:11:27.925 05:00:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:27.925 05:00:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:27.925 05:00:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:27.925 05:00:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:27.925 05:00:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:27.925 05:00:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:27.925 05:00:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.925 05:00:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:27.925 05:00:38 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:11:27.925 05:00:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.925 05:00:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:27.925 "name": "raid_bdev1", 00:11:27.925 "uuid": "b564bcae-3352-4051-99ed-3b231bb05c95", 00:11:27.925 "strip_size_kb": 0, 00:11:27.925 "state": "online", 00:11:27.925 "raid_level": "raid1", 00:11:27.925 "superblock": true, 00:11:27.925 "num_base_bdevs": 2, 00:11:27.925 "num_base_bdevs_discovered": 2, 00:11:27.925 "num_base_bdevs_operational": 2, 00:11:27.925 "process": { 00:11:27.925 "type": "rebuild", 00:11:27.925 "target": "spare", 00:11:27.925 "progress": { 00:11:27.925 "blocks": 12288, 00:11:27.925 "percent": 19 00:11:27.925 } 00:11:27.925 }, 00:11:27.925 "base_bdevs_list": [ 00:11:27.925 { 00:11:27.925 "name": "spare", 00:11:27.925 "uuid": "bd88210e-4d95-5137-a211-bf02dd9549f4", 00:11:27.925 "is_configured": true, 00:11:27.925 "data_offset": 2048, 00:11:27.925 "data_size": 63488 00:11:27.925 }, 00:11:27.925 { 00:11:27.925 "name": "BaseBdev2", 00:11:27.925 "uuid": "1a0160ff-f0ce-5ecb-9dac-bef93bd634bb", 00:11:27.925 "is_configured": true, 00:11:27.925 "data_offset": 2048, 00:11:27.925 "data_size": 63488 00:11:27.925 } 00:11:27.925 ] 00:11:27.925 }' 00:11:27.925 05:00:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:28.200 05:00:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:28.200 05:00:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:28.200 [2024-12-14 05:00:38.841943] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:11:28.200 [2024-12-14 05:00:38.842427] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 
offset_end: 18432 00:11:28.200 05:00:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:28.200 05:00:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:11:28.200 05:00:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:11:28.200 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:11:28.201 05:00:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:11:28.201 05:00:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:11:28.201 05:00:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:11:28.201 05:00:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=329 00:11:28.201 05:00:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:28.201 05:00:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:28.201 05:00:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:28.201 05:00:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:28.201 05:00:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:28.201 05:00:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:28.201 05:00:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:28.201 05:00:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.201 05:00:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:28.201 05:00:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:11:28.201 05:00:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.201 05:00:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:28.201 "name": "raid_bdev1", 00:11:28.201 "uuid": "b564bcae-3352-4051-99ed-3b231bb05c95", 00:11:28.201 "strip_size_kb": 0, 00:11:28.201 "state": "online", 00:11:28.201 "raid_level": "raid1", 00:11:28.201 "superblock": true, 00:11:28.201 "num_base_bdevs": 2, 00:11:28.201 "num_base_bdevs_discovered": 2, 00:11:28.201 "num_base_bdevs_operational": 2, 00:11:28.201 "process": { 00:11:28.201 "type": "rebuild", 00:11:28.201 "target": "spare", 00:11:28.201 "progress": { 00:11:28.201 "blocks": 14336, 00:11:28.201 "percent": 22 00:11:28.201 } 00:11:28.201 }, 00:11:28.201 "base_bdevs_list": [ 00:11:28.201 { 00:11:28.201 "name": "spare", 00:11:28.201 "uuid": "bd88210e-4d95-5137-a211-bf02dd9549f4", 00:11:28.201 "is_configured": true, 00:11:28.201 "data_offset": 2048, 00:11:28.201 "data_size": 63488 00:11:28.201 }, 00:11:28.201 { 00:11:28.201 "name": "BaseBdev2", 00:11:28.201 "uuid": "1a0160ff-f0ce-5ecb-9dac-bef93bd634bb", 00:11:28.201 "is_configured": true, 00:11:28.201 "data_offset": 2048, 00:11:28.201 "data_size": 63488 00:11:28.201 } 00:11:28.201 ] 00:11:28.201 }' 00:11:28.201 05:00:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:28.201 05:00:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:28.201 05:00:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:28.201 05:00:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:28.201 05:00:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:28.201 [2024-12-14 05:00:39.049396] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 
16384 offset_begin: 12288 offset_end: 18432 00:11:28.201 [2024-12-14 05:00:39.049671] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:11:28.487 [2024-12-14 05:00:39.354262] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:11:28.764 163.50 IOPS, 490.50 MiB/s [2024-12-14T05:00:39.647Z] [2024-12-14 05:00:39.582752] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:11:29.334 [2024-12-14 05:00:39.918402] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:11:29.334 05:00:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:29.334 05:00:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:29.334 05:00:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:29.334 05:00:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:29.334 05:00:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:29.334 05:00:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:29.334 05:00:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:29.334 05:00:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:29.334 05:00:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.334 05:00:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:29.334 [2024-12-14 05:00:40.038740] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: 
split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:11:29.334 05:00:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.334 05:00:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:29.334 "name": "raid_bdev1", 00:11:29.334 "uuid": "b564bcae-3352-4051-99ed-3b231bb05c95", 00:11:29.334 "strip_size_kb": 0, 00:11:29.334 "state": "online", 00:11:29.334 "raid_level": "raid1", 00:11:29.334 "superblock": true, 00:11:29.334 "num_base_bdevs": 2, 00:11:29.334 "num_base_bdevs_discovered": 2, 00:11:29.334 "num_base_bdevs_operational": 2, 00:11:29.334 "process": { 00:11:29.334 "type": "rebuild", 00:11:29.334 "target": "spare", 00:11:29.334 "progress": { 00:11:29.334 "blocks": 28672, 00:11:29.334 "percent": 45 00:11:29.334 } 00:11:29.334 }, 00:11:29.334 "base_bdevs_list": [ 00:11:29.334 { 00:11:29.334 "name": "spare", 00:11:29.334 "uuid": "bd88210e-4d95-5137-a211-bf02dd9549f4", 00:11:29.334 "is_configured": true, 00:11:29.334 "data_offset": 2048, 00:11:29.334 "data_size": 63488 00:11:29.334 }, 00:11:29.334 { 00:11:29.334 "name": "BaseBdev2", 00:11:29.334 "uuid": "1a0160ff-f0ce-5ecb-9dac-bef93bd634bb", 00:11:29.334 "is_configured": true, 00:11:29.334 "data_offset": 2048, 00:11:29.334 "data_size": 63488 00:11:29.334 } 00:11:29.334 ] 00:11:29.334 }' 00:11:29.334 05:00:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:29.334 05:00:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:29.334 05:00:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:29.334 05:00:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:29.334 05:00:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:29.853 140.00 IOPS, 420.00 MiB/s [2024-12-14T05:00:40.736Z] [2024-12-14 
05:00:40.720515] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:11:30.423 [2024-12-14 05:00:41.041877] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:11:30.423 05:00:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:30.423 05:00:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:30.423 05:00:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:30.423 05:00:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:30.423 05:00:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:30.423 05:00:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:30.423 05:00:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:30.423 05:00:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:30.423 05:00:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.423 05:00:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:30.423 05:00:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.423 05:00:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:30.423 "name": "raid_bdev1", 00:11:30.423 "uuid": "b564bcae-3352-4051-99ed-3b231bb05c95", 00:11:30.423 "strip_size_kb": 0, 00:11:30.423 "state": "online", 00:11:30.423 "raid_level": "raid1", 00:11:30.423 "superblock": true, 00:11:30.423 "num_base_bdevs": 2, 00:11:30.423 "num_base_bdevs_discovered": 2, 00:11:30.423 
"num_base_bdevs_operational": 2, 00:11:30.423 "process": { 00:11:30.423 "type": "rebuild", 00:11:30.423 "target": "spare", 00:11:30.423 "progress": { 00:11:30.423 "blocks": 47104, 00:11:30.423 "percent": 74 00:11:30.423 } 00:11:30.423 }, 00:11:30.423 "base_bdevs_list": [ 00:11:30.423 { 00:11:30.423 "name": "spare", 00:11:30.423 "uuid": "bd88210e-4d95-5137-a211-bf02dd9549f4", 00:11:30.423 "is_configured": true, 00:11:30.423 "data_offset": 2048, 00:11:30.423 "data_size": 63488 00:11:30.423 }, 00:11:30.423 { 00:11:30.423 "name": "BaseBdev2", 00:11:30.423 "uuid": "1a0160ff-f0ce-5ecb-9dac-bef93bd634bb", 00:11:30.423 "is_configured": true, 00:11:30.423 "data_offset": 2048, 00:11:30.423 "data_size": 63488 00:11:30.423 } 00:11:30.423 ] 00:11:30.423 }' 00:11:30.423 05:00:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:30.423 05:00:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:30.423 05:00:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:30.683 05:00:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:30.683 05:00:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:30.683 [2024-12-14 05:00:41.370078] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:11:30.943 123.83 IOPS, 371.50 MiB/s [2024-12-14T05:00:41.826Z] [2024-12-14 05:00:41.578149] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:11:30.943 [2024-12-14 05:00:41.578452] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:11:31.512 [2024-12-14 05:00:42.140231] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 
00:11:31.512 [2024-12-14 05:00:42.246354] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:11:31.512 [2024-12-14 05:00:42.248665] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:31.512 05:00:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:31.512 05:00:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:31.512 05:00:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:31.512 05:00:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:31.512 05:00:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:31.512 05:00:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:31.512 05:00:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:31.512 05:00:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:31.512 05:00:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.512 05:00:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:31.512 05:00:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.512 05:00:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:31.512 "name": "raid_bdev1", 00:11:31.512 "uuid": "b564bcae-3352-4051-99ed-3b231bb05c95", 00:11:31.512 "strip_size_kb": 0, 00:11:31.512 "state": "online", 00:11:31.512 "raid_level": "raid1", 00:11:31.512 "superblock": true, 00:11:31.512 "num_base_bdevs": 2, 00:11:31.512 "num_base_bdevs_discovered": 2, 00:11:31.512 "num_base_bdevs_operational": 2, 00:11:31.512 "base_bdevs_list": [ 
00:11:31.512 { 00:11:31.512 "name": "spare", 00:11:31.512 "uuid": "bd88210e-4d95-5137-a211-bf02dd9549f4", 00:11:31.512 "is_configured": true, 00:11:31.512 "data_offset": 2048, 00:11:31.512 "data_size": 63488 00:11:31.512 }, 00:11:31.512 { 00:11:31.512 "name": "BaseBdev2", 00:11:31.512 "uuid": "1a0160ff-f0ce-5ecb-9dac-bef93bd634bb", 00:11:31.512 "is_configured": true, 00:11:31.512 "data_offset": 2048, 00:11:31.512 "data_size": 63488 00:11:31.512 } 00:11:31.512 ] 00:11:31.512 }' 00:11:31.772 05:00:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:31.772 05:00:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:11:31.772 05:00:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:31.772 05:00:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:11:31.772 05:00:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:11:31.772 05:00:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:31.772 05:00:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:31.772 05:00:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:31.772 05:00:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:31.772 05:00:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:31.772 05:00:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:31.772 05:00:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.772 05:00:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:31.772 05:00:42 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:31.772 110.57 IOPS, 331.71 MiB/s [2024-12-14T05:00:42.655Z] 05:00:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.772 05:00:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:31.772 "name": "raid_bdev1", 00:11:31.772 "uuid": "b564bcae-3352-4051-99ed-3b231bb05c95", 00:11:31.772 "strip_size_kb": 0, 00:11:31.772 "state": "online", 00:11:31.772 "raid_level": "raid1", 00:11:31.772 "superblock": true, 00:11:31.772 "num_base_bdevs": 2, 00:11:31.772 "num_base_bdevs_discovered": 2, 00:11:31.772 "num_base_bdevs_operational": 2, 00:11:31.772 "base_bdevs_list": [ 00:11:31.772 { 00:11:31.772 "name": "spare", 00:11:31.772 "uuid": "bd88210e-4d95-5137-a211-bf02dd9549f4", 00:11:31.772 "is_configured": true, 00:11:31.772 "data_offset": 2048, 00:11:31.772 "data_size": 63488 00:11:31.772 }, 00:11:31.772 { 00:11:31.772 "name": "BaseBdev2", 00:11:31.772 "uuid": "1a0160ff-f0ce-5ecb-9dac-bef93bd634bb", 00:11:31.772 "is_configured": true, 00:11:31.772 "data_offset": 2048, 00:11:31.772 "data_size": 63488 00:11:31.772 } 00:11:31.772 ] 00:11:31.773 }' 00:11:31.773 05:00:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:31.773 05:00:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:31.773 05:00:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:31.773 05:00:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:31.773 05:00:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:31.773 05:00:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:31.773 05:00:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 
-- # local expected_state=online 00:11:31.773 05:00:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:31.773 05:00:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:31.773 05:00:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:31.773 05:00:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:31.773 05:00:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:31.773 05:00:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:31.773 05:00:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:31.773 05:00:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:31.773 05:00:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:31.773 05:00:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.773 05:00:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:31.773 05:00:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.033 05:00:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:32.033 "name": "raid_bdev1", 00:11:32.033 "uuid": "b564bcae-3352-4051-99ed-3b231bb05c95", 00:11:32.033 "strip_size_kb": 0, 00:11:32.033 "state": "online", 00:11:32.033 "raid_level": "raid1", 00:11:32.033 "superblock": true, 00:11:32.033 "num_base_bdevs": 2, 00:11:32.033 "num_base_bdevs_discovered": 2, 00:11:32.033 "num_base_bdevs_operational": 2, 00:11:32.033 "base_bdevs_list": [ 00:11:32.033 { 00:11:32.033 "name": "spare", 00:11:32.033 "uuid": "bd88210e-4d95-5137-a211-bf02dd9549f4", 00:11:32.033 "is_configured": true, 
00:11:32.033 "data_offset": 2048, 00:11:32.033 "data_size": 63488 00:11:32.033 }, 00:11:32.033 { 00:11:32.033 "name": "BaseBdev2", 00:11:32.033 "uuid": "1a0160ff-f0ce-5ecb-9dac-bef93bd634bb", 00:11:32.033 "is_configured": true, 00:11:32.033 "data_offset": 2048, 00:11:32.033 "data_size": 63488 00:11:32.033 } 00:11:32.033 ] 00:11:32.033 }' 00:11:32.033 05:00:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:32.033 05:00:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:32.293 05:00:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:32.293 05:00:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.293 05:00:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:32.293 [2024-12-14 05:00:43.072529] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:32.293 [2024-12-14 05:00:43.072603] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:32.293 00:11:32.293 Latency(us) 00:11:32.293 [2024-12-14T05:00:43.176Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:32.293 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:11:32.293 raid_bdev1 : 7.66 104.03 312.08 0.00 0.00 12440.71 271.87 109894.43 00:11:32.293 [2024-12-14T05:00:43.176Z] =================================================================================================================== 00:11:32.293 [2024-12-14T05:00:43.176Z] Total : 104.03 312.08 0.00 0.00 12440.71 271.87 109894.43 00:11:32.293 [2024-12-14 05:00:43.111563] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:32.293 [2024-12-14 05:00:43.111639] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:32.293 { 00:11:32.293 "results": [ 00:11:32.293 { 00:11:32.293 "job": "raid_bdev1", 00:11:32.293 "core_mask": "0x1", 00:11:32.293 "workload": "randrw", 00:11:32.293 "percentage": 50, 00:11:32.293 "status": "finished", 00:11:32.293 "queue_depth": 2, 00:11:32.293 "io_size": 3145728, 00:11:32.293 "runtime": 7.661478, 00:11:32.293 "iops": 104.02692535304546, 00:11:32.293 "mibps": 312.0807760591364, 00:11:32.293 "io_failed": 0, 00:11:32.293 "io_timeout": 0, 00:11:32.293 "avg_latency_us": 12440.708201607558, 00:11:32.293 "min_latency_us": 271.87423580786026, 00:11:32.293 "max_latency_us": 109894.42794759825 00:11:32.293 } 00:11:32.293 ], 00:11:32.293 "core_count": 1 00:11:32.293 } 
00:11:32.293 [2024-12-14 05:00:43.111759] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:32.293 [2024-12-14 05:00:43.111810] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:11:32.293 05:00:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.293 05:00:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:32.293 05:00:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:11:32.293 05:00:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.293 05:00:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:32.293 05:00:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.293 05:00:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:11:32.293 05:00:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:11:32.293 05:00:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:11:32.293 05:00:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks 
/var/tmp/spdk.sock spare /dev/nbd0 00:11:32.293 05:00:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:11:32.293 05:00:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:11:32.293 05:00:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:32.293 05:00:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:11:32.293 05:00:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:32.293 05:00:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:11:32.293 05:00:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:32.293 05:00:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:32.293 05:00:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:11:32.553 /dev/nbd0 00:11:32.553 05:00:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:32.554 05:00:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:32.554 05:00:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:11:32.554 05:00:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:11:32.554 05:00:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:32.554 05:00:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:32.554 05:00:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:11:32.554 05:00:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:11:32.554 05:00:43 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:32.554 05:00:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:32.554 05:00:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:32.554 1+0 records in 00:11:32.554 1+0 records out 00:11:32.554 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000403636 s, 10.1 MB/s 00:11:32.554 05:00:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:32.554 05:00:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 00:11:32.554 05:00:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:32.554 05:00:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:32.554 05:00:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:11:32.554 05:00:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:32.554 05:00:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:32.554 05:00:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:11:32.554 05:00:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:11:32.554 05:00:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:11:32.554 05:00:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:11:32.554 05:00:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:11:32.554 05:00:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 
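The `waitfornbd` helper exercised above polls `/proc/partitions` for the nbd device name, retrying up to 20 times before giving up. A hedged, self-contained sketch of that polling pattern (a temp file stands in for `/proc/partitions` so the sketch runs without real nbd devices):

```shell
# Poll up to 20 times for the device name to appear, using the same
# grep -q -w word-match the log shows. "found" flips once it matches.
partitions=$(mktemp)
printf 'major minor  #blocks  name\n  43     0   65536 nbd0\n' > "$partitions"
found=no
i=1
while [ "$i" -le 20 ]; do
    if grep -q -w nbd0 "$partitions"; then found=yes; break; fi
    i=$((i + 1))
    sleep 0.1
done
rm -f "$partitions"
echo "$found"
```

The real helper additionally does a 1-block direct-I/O `dd` read from the device, as seen in the trace, to confirm the nbd connection actually serves I/O and not just that the node exists.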
00:11:32.554 05:00:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:11:32.554 05:00:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:32.554 05:00:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:11:32.554 05:00:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:32.554 05:00:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:32.554 05:00:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:11:32.814 /dev/nbd1 00:11:32.814 05:00:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:11:32.814 05:00:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:11:32.814 05:00:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:11:32.814 05:00:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:11:32.814 05:00:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:32.814 05:00:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:32.814 05:00:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:11:32.814 05:00:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:11:32.814 05:00:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:32.814 05:00:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:32.814 05:00:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:32.814 1+0 
records in 00:11:32.814 1+0 records out 00:11:32.814 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000373176 s, 11.0 MB/s 00:11:32.814 05:00:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:32.814 05:00:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 00:11:32.814 05:00:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:32.814 05:00:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:32.814 05:00:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:11:32.814 05:00:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:32.814 05:00:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:32.814 05:00:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:11:33.074 05:00:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:11:33.074 05:00:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:11:33.074 05:00:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:11:33.074 05:00:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:33.074 05:00:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:11:33.074 05:00:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:33.074 05:00:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:11:33.074 05:00:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 
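The data-integrity step above, `cmp -i 1048576 /dev/nbd0 /dev/nbd1`, uses `--ignore-initial` to skip the first 1 MiB of both exported devices so that the per-device superblock region is excluded and only the mirrored data is compared. A miniature version with small files standing in for the nbd devices (the 8-byte "superblocks" differ on purpose):

```shell
# cmp -i N skips the first N bytes of BOTH inputs before comparing;
# here the differing 8-byte headers are ignored, the payloads match.
printf 'SB-A....payload' > nbd0.img
printf 'SB-B....payload' > nbd1.img
if cmp -s -i 8 nbd0.img nbd1.img; then result=identical; else result=differ; fi
rm -f nbd0.img nbd1.img
echo "$result"
```

A plain `cmp` without `-i` would report a difference at byte 1, since raid1 base bdevs carry distinct superblock UUIDs even when their data regions are byte-identical.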
00:11:33.074 05:00:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:33.074 05:00:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:33.074 05:00:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:33.074 05:00:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:33.074 05:00:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:33.074 05:00:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:11:33.074 05:00:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:11:33.074 05:00:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:11:33.074 05:00:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:11:33.074 05:00:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:11:33.074 05:00:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:33.074 05:00:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:11:33.074 05:00:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:33.074 05:00:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:11:33.334 05:00:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:33.334 05:00:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:33.334 05:00:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:33.334 05:00:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:33.334 05:00:44 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:33.334 05:00:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:33.334 05:00:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:11:33.334 05:00:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:11:33.334 05:00:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:11:33.334 05:00:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:11:33.334 05:00:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.334 05:00:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:33.334 05:00:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.334 05:00:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:11:33.334 05:00:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.334 05:00:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:33.334 [2024-12-14 05:00:44.161723] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:11:33.334 [2024-12-14 05:00:44.161841] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:33.334 [2024-12-14 05:00:44.161881] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:11:33.334 [2024-12-14 05:00:44.161930] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:33.334 [2024-12-14 05:00:44.164129] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:33.334 [2024-12-14 05:00:44.164176] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
spare 00:11:33.334 [2024-12-14 05:00:44.164263] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:11:33.334 [2024-12-14 05:00:44.164311] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:33.334 [2024-12-14 05:00:44.164420] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:33.334 spare 00:11:33.334 05:00:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.334 05:00:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:11:33.334 05:00:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.334 05:00:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:33.594 [2024-12-14 05:00:44.264325] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006600 00:11:33.594 [2024-12-14 05:00:44.264403] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:33.594 [2024-12-14 05:00:44.264733] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002af30 00:11:33.594 [2024-12-14 05:00:44.264930] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006600 00:11:33.594 [2024-12-14 05:00:44.264983] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006600 00:11:33.594 [2024-12-14 05:00:44.265202] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:33.594 05:00:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.594 05:00:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:33.594 05:00:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:33.594 05:00:44 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:33.594 05:00:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:33.594 05:00:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:33.594 05:00:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:33.594 05:00:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:33.594 05:00:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:33.594 05:00:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:33.594 05:00:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:33.594 05:00:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:33.594 05:00:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:33.594 05:00:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.594 05:00:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:33.594 05:00:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.594 05:00:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:33.594 "name": "raid_bdev1", 00:11:33.594 "uuid": "b564bcae-3352-4051-99ed-3b231bb05c95", 00:11:33.594 "strip_size_kb": 0, 00:11:33.594 "state": "online", 00:11:33.594 "raid_level": "raid1", 00:11:33.594 "superblock": true, 00:11:33.594 "num_base_bdevs": 2, 00:11:33.594 "num_base_bdevs_discovered": 2, 00:11:33.594 "num_base_bdevs_operational": 2, 00:11:33.594 "base_bdevs_list": [ 00:11:33.594 { 00:11:33.594 "name": "spare", 00:11:33.594 "uuid": 
"bd88210e-4d95-5137-a211-bf02dd9549f4", 00:11:33.594 "is_configured": true, 00:11:33.594 "data_offset": 2048, 00:11:33.594 "data_size": 63488 00:11:33.594 }, 00:11:33.594 { 00:11:33.594 "name": "BaseBdev2", 00:11:33.594 "uuid": "1a0160ff-f0ce-5ecb-9dac-bef93bd634bb", 00:11:33.594 "is_configured": true, 00:11:33.594 "data_offset": 2048, 00:11:33.594 "data_size": 63488 00:11:33.594 } 00:11:33.594 ] 00:11:33.594 }' 00:11:33.595 05:00:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:33.595 05:00:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:33.854 05:00:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:33.854 05:00:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:33.854 05:00:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:33.854 05:00:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:33.854 05:00:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:33.854 05:00:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:33.854 05:00:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:33.854 05:00:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.854 05:00:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:34.114 05:00:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.114 05:00:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:34.114 "name": "raid_bdev1", 00:11:34.114 "uuid": "b564bcae-3352-4051-99ed-3b231bb05c95", 00:11:34.114 "strip_size_kb": 0, 00:11:34.114 
"state": "online", 00:11:34.114 "raid_level": "raid1", 00:11:34.114 "superblock": true, 00:11:34.114 "num_base_bdevs": 2, 00:11:34.114 "num_base_bdevs_discovered": 2, 00:11:34.114 "num_base_bdevs_operational": 2, 00:11:34.114 "base_bdevs_list": [ 00:11:34.114 { 00:11:34.114 "name": "spare", 00:11:34.114 "uuid": "bd88210e-4d95-5137-a211-bf02dd9549f4", 00:11:34.114 "is_configured": true, 00:11:34.114 "data_offset": 2048, 00:11:34.114 "data_size": 63488 00:11:34.114 }, 00:11:34.114 { 00:11:34.114 "name": "BaseBdev2", 00:11:34.114 "uuid": "1a0160ff-f0ce-5ecb-9dac-bef93bd634bb", 00:11:34.114 "is_configured": true, 00:11:34.114 "data_offset": 2048, 00:11:34.114 "data_size": 63488 00:11:34.114 } 00:11:34.114 ] 00:11:34.114 }' 00:11:34.114 05:00:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:34.114 05:00:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:34.114 05:00:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:34.114 05:00:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:34.114 05:00:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:34.114 05:00:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:11:34.114 05:00:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.114 05:00:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:34.114 05:00:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.114 05:00:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:11:34.114 05:00:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:11:34.114 
05:00:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.114 05:00:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:34.114 [2024-12-14 05:00:44.888545] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:34.114 05:00:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.114 05:00:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:34.114 05:00:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:34.114 05:00:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:34.114 05:00:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:34.114 05:00:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:34.114 05:00:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:34.114 05:00:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:34.114 05:00:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:34.114 05:00:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:34.114 05:00:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:34.114 05:00:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:34.114 05:00:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:34.114 05:00:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.114 05:00:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 
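After `bdev_raid_remove_base_bdev spare`, the test expects `verify_raid_bdev_state raid_bdev1 online raid1 0 1` to pass: the array stays online with one operational base bdev. A hedged sketch of the expectation being verified (the one-survivor threshold is an assumption stated for the sketch, matching raid1's ability to serve I/O from a single mirror):

```shell
# A 2-disk raid1 degraded to one operational base bdev should still
# report "online"; with zero operational bdevs it would go offline.
num_operational=1
min_operational=1   # assumption: raid1 needs >= 1 operational mirror
if [ "$num_operational" -ge "$min_operational" ]; then
    state=online
else
    state=offline
fi
echo "$state"
```

This matches the JSON that follows in the log: `num_base_bdevs` stays 2, `num_base_bdevs_discovered` and `num_base_bdevs_operational` drop to 1, and the removed slot is listed with a null name and the all-zero UUID.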
00:11:34.114 05:00:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.114 05:00:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:34.114 "name": "raid_bdev1", 00:11:34.114 "uuid": "b564bcae-3352-4051-99ed-3b231bb05c95", 00:11:34.114 "strip_size_kb": 0, 00:11:34.114 "state": "online", 00:11:34.114 "raid_level": "raid1", 00:11:34.114 "superblock": true, 00:11:34.114 "num_base_bdevs": 2, 00:11:34.114 "num_base_bdevs_discovered": 1, 00:11:34.114 "num_base_bdevs_operational": 1, 00:11:34.114 "base_bdevs_list": [ 00:11:34.114 { 00:11:34.114 "name": null, 00:11:34.114 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:34.114 "is_configured": false, 00:11:34.114 "data_offset": 0, 00:11:34.114 "data_size": 63488 00:11:34.114 }, 00:11:34.114 { 00:11:34.114 "name": "BaseBdev2", 00:11:34.114 "uuid": "1a0160ff-f0ce-5ecb-9dac-bef93bd634bb", 00:11:34.114 "is_configured": true, 00:11:34.114 "data_offset": 2048, 00:11:34.114 "data_size": 63488 00:11:34.114 } 00:11:34.114 ] 00:11:34.114 }' 00:11:34.114 05:00:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:34.114 05:00:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:34.684 05:00:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:11:34.684 05:00:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.684 05:00:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:34.684 [2024-12-14 05:00:45.371784] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:34.684 [2024-12-14 05:00:45.372010] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:11:34.684 [2024-12-14 05:00:45.372072] 
bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:11:34.684 [2024-12-14 05:00:45.372185] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:34.684 [2024-12-14 05:00:45.376706] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b000 00:11:34.684 05:00:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.684 05:00:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:11:34.684 [2024-12-14 05:00:45.378531] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:35.624 05:00:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:35.624 05:00:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:35.624 05:00:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:35.624 05:00:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:35.624 05:00:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:35.624 05:00:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:35.624 05:00:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:35.624 05:00:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.624 05:00:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:35.624 05:00:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.624 05:00:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:35.624 "name": "raid_bdev1", 00:11:35.624 "uuid": 
"b564bcae-3352-4051-99ed-3b231bb05c95", 00:11:35.624 "strip_size_kb": 0, 00:11:35.624 "state": "online", 00:11:35.624 "raid_level": "raid1", 00:11:35.624 "superblock": true, 00:11:35.624 "num_base_bdevs": 2, 00:11:35.624 "num_base_bdevs_discovered": 2, 00:11:35.624 "num_base_bdevs_operational": 2, 00:11:35.624 "process": { 00:11:35.624 "type": "rebuild", 00:11:35.624 "target": "spare", 00:11:35.624 "progress": { 00:11:35.624 "blocks": 20480, 00:11:35.624 "percent": 32 00:11:35.624 } 00:11:35.624 }, 00:11:35.624 "base_bdevs_list": [ 00:11:35.624 { 00:11:35.624 "name": "spare", 00:11:35.624 "uuid": "bd88210e-4d95-5137-a211-bf02dd9549f4", 00:11:35.624 "is_configured": true, 00:11:35.624 "data_offset": 2048, 00:11:35.624 "data_size": 63488 00:11:35.624 }, 00:11:35.624 { 00:11:35.624 "name": "BaseBdev2", 00:11:35.624 "uuid": "1a0160ff-f0ce-5ecb-9dac-bef93bd634bb", 00:11:35.624 "is_configured": true, 00:11:35.624 "data_offset": 2048, 00:11:35.624 "data_size": 63488 00:11:35.624 } 00:11:35.624 ] 00:11:35.624 }' 00:11:35.624 05:00:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:35.624 05:00:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:35.624 05:00:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:35.884 05:00:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:35.884 05:00:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:11:35.884 05:00:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.884 05:00:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:35.884 [2024-12-14 05:00:46.542724] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:35.884 [2024-12-14 05:00:46.582597] 
bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:11:35.884 [2024-12-14 05:00:46.582696] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:35.884 [2024-12-14 05:00:46.582767] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:35.884 [2024-12-14 05:00:46.582796] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:11:35.884 05:00:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.884 05:00:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:35.884 05:00:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:35.884 05:00:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:35.884 05:00:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:35.884 05:00:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:35.884 05:00:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:35.884 05:00:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:35.884 05:00:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:35.884 05:00:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:35.884 05:00:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:35.884 05:00:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:35.884 05:00:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:35.884 05:00:46 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.884 05:00:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:35.884 05:00:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.884 05:00:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:35.884 "name": "raid_bdev1", 00:11:35.884 "uuid": "b564bcae-3352-4051-99ed-3b231bb05c95", 00:11:35.884 "strip_size_kb": 0, 00:11:35.884 "state": "online", 00:11:35.884 "raid_level": "raid1", 00:11:35.884 "superblock": true, 00:11:35.884 "num_base_bdevs": 2, 00:11:35.884 "num_base_bdevs_discovered": 1, 00:11:35.884 "num_base_bdevs_operational": 1, 00:11:35.884 "base_bdevs_list": [ 00:11:35.884 { 00:11:35.884 "name": null, 00:11:35.884 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:35.884 "is_configured": false, 00:11:35.884 "data_offset": 0, 00:11:35.884 "data_size": 63488 00:11:35.884 }, 00:11:35.884 { 00:11:35.884 "name": "BaseBdev2", 00:11:35.884 "uuid": "1a0160ff-f0ce-5ecb-9dac-bef93bd634bb", 00:11:35.884 "is_configured": true, 00:11:35.884 "data_offset": 2048, 00:11:35.884 "data_size": 63488 00:11:35.884 } 00:11:35.884 ] 00:11:35.884 }' 00:11:35.884 05:00:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:35.884 05:00:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:36.144 05:00:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:11:36.144 05:00:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.144 05:00:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:36.144 [2024-12-14 05:00:47.006570] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:11:36.144 [2024-12-14 05:00:47.006670] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:36.144 [2024-12-14 05:00:47.006697] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:11:36.144 [2024-12-14 05:00:47.006706] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:36.144 [2024-12-14 05:00:47.007145] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:36.144 [2024-12-14 05:00:47.007165] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:11:36.144 [2024-12-14 05:00:47.007280] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:11:36.144 [2024-12-14 05:00:47.007293] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:11:36.144 [2024-12-14 05:00:47.007303] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:11:36.144 [2024-12-14 05:00:47.007323] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:36.144 spare 00:11:36.144 [2024-12-14 05:00:47.011479] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b0d0 00:11:36.144 05:00:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.144 05:00:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:11:36.144 [2024-12-14 05:00:47.013319] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:37.525 05:00:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:37.525 05:00:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:37.525 05:00:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:37.525 05:00:48 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:37.525 05:00:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:37.525 05:00:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:37.525 05:00:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:37.525 05:00:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.525 05:00:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:37.525 05:00:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.525 05:00:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:37.525 "name": "raid_bdev1", 00:11:37.525 "uuid": "b564bcae-3352-4051-99ed-3b231bb05c95", 00:11:37.525 "strip_size_kb": 0, 00:11:37.525 "state": "online", 00:11:37.525 "raid_level": "raid1", 00:11:37.525 "superblock": true, 00:11:37.525 "num_base_bdevs": 2, 00:11:37.525 "num_base_bdevs_discovered": 2, 00:11:37.525 "num_base_bdevs_operational": 2, 00:11:37.525 "process": { 00:11:37.525 "type": "rebuild", 00:11:37.525 "target": "spare", 00:11:37.525 "progress": { 00:11:37.525 "blocks": 20480, 00:11:37.525 "percent": 32 00:11:37.525 } 00:11:37.525 }, 00:11:37.525 "base_bdevs_list": [ 00:11:37.525 { 00:11:37.525 "name": "spare", 00:11:37.525 "uuid": "bd88210e-4d95-5137-a211-bf02dd9549f4", 00:11:37.525 "is_configured": true, 00:11:37.525 "data_offset": 2048, 00:11:37.525 "data_size": 63488 00:11:37.525 }, 00:11:37.525 { 00:11:37.525 "name": "BaseBdev2", 00:11:37.525 "uuid": "1a0160ff-f0ce-5ecb-9dac-bef93bd634bb", 00:11:37.525 "is_configured": true, 00:11:37.525 "data_offset": 2048, 00:11:37.525 "data_size": 63488 00:11:37.525 } 00:11:37.525 ] 00:11:37.525 }' 00:11:37.525 05:00:48 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:37.525 05:00:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:37.525 05:00:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:37.525 05:00:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:37.525 05:00:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:11:37.525 05:00:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.525 05:00:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:37.525 [2024-12-14 05:00:48.165649] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:37.525 [2024-12-14 05:00:48.217489] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:11:37.525 [2024-12-14 05:00:48.217607] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:37.525 [2024-12-14 05:00:48.217643] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:37.525 [2024-12-14 05:00:48.217670] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:11:37.525 05:00:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.525 05:00:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:37.525 05:00:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:37.525 05:00:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:37.525 05:00:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:37.525 05:00:48 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:37.525 05:00:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:37.525 05:00:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:37.525 05:00:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:37.525 05:00:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:37.525 05:00:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:37.525 05:00:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:37.525 05:00:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:37.525 05:00:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.525 05:00:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:37.525 05:00:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.525 05:00:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:37.525 "name": "raid_bdev1", 00:11:37.525 "uuid": "b564bcae-3352-4051-99ed-3b231bb05c95", 00:11:37.525 "strip_size_kb": 0, 00:11:37.525 "state": "online", 00:11:37.525 "raid_level": "raid1", 00:11:37.525 "superblock": true, 00:11:37.525 "num_base_bdevs": 2, 00:11:37.525 "num_base_bdevs_discovered": 1, 00:11:37.525 "num_base_bdevs_operational": 1, 00:11:37.525 "base_bdevs_list": [ 00:11:37.525 { 00:11:37.525 "name": null, 00:11:37.525 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:37.525 "is_configured": false, 00:11:37.525 "data_offset": 0, 00:11:37.525 "data_size": 63488 00:11:37.525 }, 00:11:37.525 { 00:11:37.525 "name": "BaseBdev2", 00:11:37.525 "uuid": 
"1a0160ff-f0ce-5ecb-9dac-bef93bd634bb", 00:11:37.525 "is_configured": true, 00:11:37.525 "data_offset": 2048, 00:11:37.525 "data_size": 63488 00:11:37.525 } 00:11:37.525 ] 00:11:37.525 }' 00:11:37.525 05:00:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:37.525 05:00:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:37.785 05:00:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:37.785 05:00:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:37.785 05:00:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:37.785 05:00:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:37.785 05:00:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:37.785 05:00:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:37.785 05:00:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:37.785 05:00:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.785 05:00:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:37.785 05:00:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.045 05:00:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:38.045 "name": "raid_bdev1", 00:11:38.045 "uuid": "b564bcae-3352-4051-99ed-3b231bb05c95", 00:11:38.045 "strip_size_kb": 0, 00:11:38.045 "state": "online", 00:11:38.045 "raid_level": "raid1", 00:11:38.045 "superblock": true, 00:11:38.045 "num_base_bdevs": 2, 00:11:38.045 "num_base_bdevs_discovered": 1, 00:11:38.045 "num_base_bdevs_operational": 1, 00:11:38.045 
"base_bdevs_list": [ 00:11:38.045 { 00:11:38.045 "name": null, 00:11:38.045 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:38.045 "is_configured": false, 00:11:38.045 "data_offset": 0, 00:11:38.045 "data_size": 63488 00:11:38.045 }, 00:11:38.045 { 00:11:38.045 "name": "BaseBdev2", 00:11:38.045 "uuid": "1a0160ff-f0ce-5ecb-9dac-bef93bd634bb", 00:11:38.045 "is_configured": true, 00:11:38.045 "data_offset": 2048, 00:11:38.045 "data_size": 63488 00:11:38.045 } 00:11:38.045 ] 00:11:38.045 }' 00:11:38.045 05:00:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:38.045 05:00:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:38.045 05:00:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:38.045 05:00:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:38.045 05:00:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:11:38.045 05:00:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.045 05:00:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:38.045 05:00:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.045 05:00:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:11:38.045 05:00:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.045 05:00:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:38.045 [2024-12-14 05:00:48.785010] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:11:38.045 [2024-12-14 05:00:48.785127] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 
00:11:38.045 [2024-12-14 05:00:48.785182] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:11:38.045 [2024-12-14 05:00:48.785228] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:38.045 [2024-12-14 05:00:48.785679] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:38.045 [2024-12-14 05:00:48.785745] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:38.045 [2024-12-14 05:00:48.785862] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:11:38.045 [2024-12-14 05:00:48.785913] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:11:38.045 [2024-12-14 05:00:48.785984] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:11:38.045 [2024-12-14 05:00:48.786051] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:11:38.045 BaseBdev1 00:11:38.045 05:00:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.045 05:00:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:11:38.985 05:00:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:38.985 05:00:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:38.985 05:00:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:38.985 05:00:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:38.985 05:00:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:38.985 05:00:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:11:38.985 05:00:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:38.985 05:00:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:38.985 05:00:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:38.985 05:00:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:38.985 05:00:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:38.985 05:00:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:38.985 05:00:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.985 05:00:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:38.985 05:00:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.985 05:00:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:38.985 "name": "raid_bdev1", 00:11:38.985 "uuid": "b564bcae-3352-4051-99ed-3b231bb05c95", 00:11:38.985 "strip_size_kb": 0, 00:11:38.985 "state": "online", 00:11:38.985 "raid_level": "raid1", 00:11:38.985 "superblock": true, 00:11:38.985 "num_base_bdevs": 2, 00:11:38.985 "num_base_bdevs_discovered": 1, 00:11:38.985 "num_base_bdevs_operational": 1, 00:11:38.985 "base_bdevs_list": [ 00:11:38.985 { 00:11:38.985 "name": null, 00:11:38.985 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:38.985 "is_configured": false, 00:11:38.985 "data_offset": 0, 00:11:38.985 "data_size": 63488 00:11:38.985 }, 00:11:38.985 { 00:11:38.985 "name": "BaseBdev2", 00:11:38.985 "uuid": "1a0160ff-f0ce-5ecb-9dac-bef93bd634bb", 00:11:38.985 "is_configured": true, 00:11:38.985 "data_offset": 2048, 00:11:38.985 "data_size": 63488 00:11:38.985 } 00:11:38.985 ] 00:11:38.985 }' 
00:11:38.985 05:00:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:38.985 05:00:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:39.554 05:00:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:39.555 05:00:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:39.555 05:00:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:39.555 05:00:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:39.555 05:00:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:39.555 05:00:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:39.555 05:00:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:39.555 05:00:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.555 05:00:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:39.555 05:00:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.555 05:00:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:39.555 "name": "raid_bdev1", 00:11:39.555 "uuid": "b564bcae-3352-4051-99ed-3b231bb05c95", 00:11:39.555 "strip_size_kb": 0, 00:11:39.555 "state": "online", 00:11:39.555 "raid_level": "raid1", 00:11:39.555 "superblock": true, 00:11:39.555 "num_base_bdevs": 2, 00:11:39.555 "num_base_bdevs_discovered": 1, 00:11:39.555 "num_base_bdevs_operational": 1, 00:11:39.555 "base_bdevs_list": [ 00:11:39.555 { 00:11:39.555 "name": null, 00:11:39.555 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:39.555 "is_configured": false, 00:11:39.555 "data_offset": 0, 
00:11:39.555 "data_size": 63488 00:11:39.555 }, 00:11:39.555 { 00:11:39.555 "name": "BaseBdev2", 00:11:39.555 "uuid": "1a0160ff-f0ce-5ecb-9dac-bef93bd634bb", 00:11:39.555 "is_configured": true, 00:11:39.555 "data_offset": 2048, 00:11:39.555 "data_size": 63488 00:11:39.555 } 00:11:39.555 ] 00:11:39.555 }' 00:11:39.555 05:00:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:39.555 05:00:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:39.555 05:00:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:39.555 05:00:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:39.555 05:00:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:11:39.555 05:00:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@650 -- # local es=0 00:11:39.555 05:00:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:11:39.555 05:00:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:11:39.555 05:00:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:39.555 05:00:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:11:39.555 05:00:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:39.555 05:00:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:11:39.555 05:00:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.555 05:00:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 
00:11:39.555 [2024-12-14 05:00:50.326643] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:39.555 [2024-12-14 05:00:50.326863] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:11:39.555 [2024-12-14 05:00:50.326925] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:11:39.555 request: 00:11:39.555 { 00:11:39.555 "base_bdev": "BaseBdev1", 00:11:39.555 "raid_bdev": "raid_bdev1", 00:11:39.555 "method": "bdev_raid_add_base_bdev", 00:11:39.555 "req_id": 1 00:11:39.555 } 00:11:39.555 Got JSON-RPC error response 00:11:39.555 response: 00:11:39.555 { 00:11:39.555 "code": -22, 00:11:39.555 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:11:39.555 } 00:11:39.555 05:00:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:11:39.555 05:00:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # es=1 00:11:39.555 05:00:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:39.555 05:00:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:39.555 05:00:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:39.555 05:00:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:11:40.494 05:00:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:40.494 05:00:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:40.494 05:00:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:40.494 05:00:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:40.494 05:00:51 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:40.494 05:00:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:40.494 05:00:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:40.494 05:00:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:40.494 05:00:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:40.494 05:00:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:40.494 05:00:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:40.494 05:00:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:40.494 05:00:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.494 05:00:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:40.494 05:00:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.754 05:00:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:40.754 "name": "raid_bdev1", 00:11:40.754 "uuid": "b564bcae-3352-4051-99ed-3b231bb05c95", 00:11:40.754 "strip_size_kb": 0, 00:11:40.754 "state": "online", 00:11:40.754 "raid_level": "raid1", 00:11:40.754 "superblock": true, 00:11:40.754 "num_base_bdevs": 2, 00:11:40.754 "num_base_bdevs_discovered": 1, 00:11:40.754 "num_base_bdevs_operational": 1, 00:11:40.754 "base_bdevs_list": [ 00:11:40.754 { 00:11:40.754 "name": null, 00:11:40.754 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:40.754 "is_configured": false, 00:11:40.754 "data_offset": 0, 00:11:40.754 "data_size": 63488 00:11:40.754 }, 00:11:40.754 { 00:11:40.754 "name": "BaseBdev2", 00:11:40.754 "uuid": 
"1a0160ff-f0ce-5ecb-9dac-bef93bd634bb", 00:11:40.754 "is_configured": true, 00:11:40.754 "data_offset": 2048, 00:11:40.754 "data_size": 63488 00:11:40.754 } 00:11:40.754 ] 00:11:40.754 }' 00:11:40.754 05:00:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:40.754 05:00:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:41.014 05:00:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:41.014 05:00:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:41.014 05:00:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:41.014 05:00:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:41.014 05:00:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:41.014 05:00:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:41.014 05:00:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:41.014 05:00:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.014 05:00:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:41.014 05:00:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.014 05:00:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:41.014 "name": "raid_bdev1", 00:11:41.014 "uuid": "b564bcae-3352-4051-99ed-3b231bb05c95", 00:11:41.014 "strip_size_kb": 0, 00:11:41.014 "state": "online", 00:11:41.014 "raid_level": "raid1", 00:11:41.014 "superblock": true, 00:11:41.014 "num_base_bdevs": 2, 00:11:41.014 "num_base_bdevs_discovered": 1, 00:11:41.014 "num_base_bdevs_operational": 1, 00:11:41.014 
"base_bdevs_list": [ 00:11:41.014 { 00:11:41.014 "name": null, 00:11:41.014 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:41.014 "is_configured": false, 00:11:41.014 "data_offset": 0, 00:11:41.014 "data_size": 63488 00:11:41.014 }, 00:11:41.014 { 00:11:41.014 "name": "BaseBdev2", 00:11:41.014 "uuid": "1a0160ff-f0ce-5ecb-9dac-bef93bd634bb", 00:11:41.014 "is_configured": true, 00:11:41.014 "data_offset": 2048, 00:11:41.014 "data_size": 63488 00:11:41.014 } 00:11:41.014 ] 00:11:41.014 }' 00:11:41.014 05:00:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:41.274 05:00:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:41.274 05:00:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:41.274 05:00:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:41.274 05:00:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 87494 00:11:41.274 05:00:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@950 -- # '[' -z 87494 ']' 00:11:41.274 05:00:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # kill -0 87494 00:11:41.274 05:00:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@955 -- # uname 00:11:41.274 05:00:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:41.274 05:00:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 87494 00:11:41.274 killing process with pid 87494 00:11:41.274 Received shutdown signal, test time was about 16.557582 seconds 00:11:41.274 00:11:41.274 Latency(us) 00:11:41.274 [2024-12-14T05:00:52.157Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:41.274 [2024-12-14T05:00:52.157Z] 
=================================================================================================================== 00:11:41.274 [2024-12-14T05:00:52.157Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:11:41.274 05:00:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:41.274 05:00:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:41.274 05:00:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@968 -- # echo 'killing process with pid 87494' 00:11:41.274 05:00:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@969 -- # kill 87494 00:11:41.274 [2024-12-14 05:00:51.987359] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:41.274 [2024-12-14 05:00:51.987479] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:41.274 [2024-12-14 05:00:51.987530] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:41.274 [2024-12-14 05:00:51.987539] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state offline 00:11:41.274 05:00:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@974 -- # wait 87494 00:11:41.274 [2024-12-14 05:00:52.013800] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:41.534 ************************************ 00:11:41.534 END TEST raid_rebuild_test_sb_io 00:11:41.534 ************************************ 00:11:41.534 05:00:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:11:41.534 00:11:41.534 real 0m18.429s 00:11:41.534 user 0m24.656s 00:11:41.534 sys 0m2.015s 00:11:41.534 05:00:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:41.534 05:00:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:41.534 05:00:52 bdev_raid -- 
bdev/bdev_raid.sh@977 -- # for n in 2 4 00:11:41.534 05:00:52 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false true 00:11:41.534 05:00:52 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:11:41.534 05:00:52 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:41.534 05:00:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:41.534 ************************************ 00:11:41.534 START TEST raid_rebuild_test 00:11:41.534 ************************************ 00:11:41.534 05:00:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 4 false false true 00:11:41.534 05:00:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:11:41.534 05:00:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:11:41.534 05:00:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:11:41.534 05:00:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:11:41.534 05:00:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:11:41.534 05:00:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:11:41.534 05:00:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:41.534 05:00:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:11:41.534 05:00:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:41.534 05:00:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:41.534 05:00:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:11:41.534 05:00:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:41.534 05:00:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 
00:11:41.534 05:00:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:11:41.534 05:00:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:41.534 05:00:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:41.534 05:00:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:11:41.534 05:00:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:41.534 05:00:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:41.534 05:00:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:41.534 05:00:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:11:41.534 05:00:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:11:41.534 05:00:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:11:41.534 05:00:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:11:41.534 05:00:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:11:41.534 05:00:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:11:41.534 05:00:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:11:41.534 05:00:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:11:41.534 05:00:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:11:41.534 05:00:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=88169 00:11:41.534 05:00:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:11:41.534 05:00:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # 
waitforlisten 88169 00:11:41.534 05:00:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@831 -- # '[' -z 88169 ']' 00:11:41.534 05:00:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:41.534 05:00:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:41.534 05:00:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:41.534 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:41.534 05:00:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:41.534 05:00:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.534 I/O size of 3145728 is greater than zero copy threshold (65536). 00:11:41.534 Zero copy mechanism will not be used. 00:11:41.534 [2024-12-14 05:00:52.413871] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:11:41.534 [2024-12-14 05:00:52.413997] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88169 ] 00:11:41.794 [2024-12-14 05:00:52.574509] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:41.794 [2024-12-14 05:00:52.620882] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:41.794 [2024-12-14 05:00:52.662797] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:41.794 [2024-12-14 05:00:52.662837] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:42.364 05:00:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:42.364 05:00:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # return 0 00:11:42.364 05:00:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:42.364 05:00:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:42.364 05:00:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.364 05:00:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.624 BaseBdev1_malloc 00:11:42.624 05:00:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.624 05:00:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:11:42.624 05:00:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.624 05:00:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.624 [2024-12-14 05:00:53.257749] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:11:42.624 
[2024-12-14 05:00:53.257875] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:42.624 [2024-12-14 05:00:53.257925] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:42.624 [2024-12-14 05:00:53.257973] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:42.624 [2024-12-14 05:00:53.260111] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:42.624 [2024-12-14 05:00:53.260217] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:42.624 BaseBdev1 00:11:42.624 05:00:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.624 05:00:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:42.624 05:00:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:42.624 05:00:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.624 05:00:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.624 BaseBdev2_malloc 00:11:42.624 05:00:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.624 05:00:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:11:42.624 05:00:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.624 05:00:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.624 [2024-12-14 05:00:53.299177] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:11:42.624 [2024-12-14 05:00:53.299388] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:42.624 [2024-12-14 05:00:53.299509] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 
0x0x616000007e80 00:11:42.624 [2024-12-14 05:00:53.299621] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:42.624 BaseBdev2 00:11:42.624 [2024-12-14 05:00:53.304424] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:42.624 [2024-12-14 05:00:53.304497] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:42.624 05:00:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.624 05:00:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:42.624 05:00:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:42.624 05:00:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.624 05:00:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.624 BaseBdev3_malloc 00:11:42.624 05:00:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.624 05:00:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:11:42.624 05:00:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.624 05:00:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.624 [2024-12-14 05:00:53.330316] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:11:42.624 [2024-12-14 05:00:53.330416] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:42.624 [2024-12-14 05:00:53.330457] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:11:42.624 [2024-12-14 05:00:53.330504] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:42.624 [2024-12-14 05:00:53.332555] vbdev_passthru.c: 709:vbdev_passthru_register: 
*NOTICE*: pt_bdev registered 00:11:42.624 [2024-12-14 05:00:53.332625] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:42.624 BaseBdev3 00:11:42.624 05:00:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.624 05:00:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:42.624 05:00:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:11:42.624 05:00:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.624 05:00:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.624 BaseBdev4_malloc 00:11:42.624 05:00:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.624 05:00:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:11:42.624 05:00:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.624 05:00:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.624 [2024-12-14 05:00:53.358753] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:11:42.624 [2024-12-14 05:00:53.358862] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:42.624 [2024-12-14 05:00:53.358904] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:11:42.624 [2024-12-14 05:00:53.358948] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:42.624 [2024-12-14 05:00:53.360994] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:42.624 [2024-12-14 05:00:53.361065] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:42.624 BaseBdev4 00:11:42.624 05:00:53 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.624 05:00:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:11:42.624 05:00:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.624 05:00:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.624 spare_malloc 00:11:42.624 05:00:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.624 05:00:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:11:42.624 05:00:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.624 05:00:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.624 spare_delay 00:11:42.625 05:00:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.625 05:00:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:11:42.625 05:00:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.625 05:00:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.625 [2024-12-14 05:00:53.407148] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:11:42.625 [2024-12-14 05:00:53.407209] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:42.625 [2024-12-14 05:00:53.407255] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:11:42.625 [2024-12-14 05:00:53.407264] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:42.625 [2024-12-14 05:00:53.409292] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:42.625 [2024-12-14 05:00:53.409329] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:11:42.625 spare 00:11:42.625 05:00:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.625 05:00:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:11:42.625 05:00:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.625 05:00:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.625 [2024-12-14 05:00:53.419240] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:42.625 [2024-12-14 05:00:53.421311] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:42.625 [2024-12-14 05:00:53.421451] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:42.625 [2024-12-14 05:00:53.421531] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:42.625 [2024-12-14 05:00:53.421669] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:11:42.625 [2024-12-14 05:00:53.421721] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:11:42.625 [2024-12-14 05:00:53.421992] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:11:42.625 [2024-12-14 05:00:53.422196] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:11:42.625 [2024-12-14 05:00:53.422249] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:11:42.625 [2024-12-14 05:00:53.422469] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:42.625 05:00:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.625 05:00:53 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:11:42.625 05:00:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:42.625 05:00:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:42.625 05:00:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:42.625 05:00:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:42.625 05:00:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:42.625 05:00:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:42.625 05:00:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:42.625 05:00:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:42.625 05:00:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:42.625 05:00:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:42.625 05:00:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:42.625 05:00:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.625 05:00:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.625 05:00:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.625 05:00:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:42.625 "name": "raid_bdev1", 00:11:42.625 "uuid": "d49f2e7f-ebed-4b1d-bb7f-95ecb669bda7", 00:11:42.625 "strip_size_kb": 0, 00:11:42.625 "state": "online", 00:11:42.625 "raid_level": "raid1", 00:11:42.625 "superblock": false, 00:11:42.625 "num_base_bdevs": 4, 00:11:42.625 "num_base_bdevs_discovered": 4, 
00:11:42.625 "num_base_bdevs_operational": 4, 00:11:42.625 "base_bdevs_list": [ 00:11:42.625 { 00:11:42.625 "name": "BaseBdev1", 00:11:42.625 "uuid": "7d5b8a2d-49ce-55f4-87a3-bd4ffb45e20d", 00:11:42.625 "is_configured": true, 00:11:42.625 "data_offset": 0, 00:11:42.625 "data_size": 65536 00:11:42.625 }, 00:11:42.625 { 00:11:42.625 "name": "BaseBdev2", 00:11:42.625 "uuid": "5ca99e4a-b370-5e46-873e-50f0453c73f8", 00:11:42.625 "is_configured": true, 00:11:42.625 "data_offset": 0, 00:11:42.625 "data_size": 65536 00:11:42.625 }, 00:11:42.625 { 00:11:42.625 "name": "BaseBdev3", 00:11:42.625 "uuid": "ae1a4421-1c43-5e8c-8923-2556fbeb5429", 00:11:42.625 "is_configured": true, 00:11:42.625 "data_offset": 0, 00:11:42.625 "data_size": 65536 00:11:42.625 }, 00:11:42.625 { 00:11:42.625 "name": "BaseBdev4", 00:11:42.625 "uuid": "86cc8791-5a93-53b8-8650-2e9e1fab2e7e", 00:11:42.625 "is_configured": true, 00:11:42.625 "data_offset": 0, 00:11:42.625 "data_size": 65536 00:11:42.625 } 00:11:42.625 ] 00:11:42.625 }' 00:11:42.625 05:00:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:42.625 05:00:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.195 05:00:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:43.195 05:00:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:11:43.195 05:00:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.195 05:00:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.195 [2024-12-14 05:00:53.886654] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:43.195 05:00:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.195 05:00:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:11:43.195 05:00:53 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:43.195 05:00:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.195 05:00:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.195 05:00:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:11:43.195 05:00:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.195 05:00:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:11:43.195 05:00:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:11:43.195 05:00:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:11:43.195 05:00:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:11:43.195 05:00:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:11:43.195 05:00:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:11:43.195 05:00:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:11:43.195 05:00:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:43.195 05:00:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:11:43.195 05:00:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:43.195 05:00:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:11:43.195 05:00:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:43.195 05:00:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:43.195 05:00:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:11:43.455 
[2024-12-14 05:00:54.157962] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:11:43.455 /dev/nbd0 00:11:43.455 05:00:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:43.455 05:00:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:43.455 05:00:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:11:43.455 05:00:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:11:43.455 05:00:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:43.455 05:00:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:43.455 05:00:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:11:43.455 05:00:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:11:43.455 05:00:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:43.455 05:00:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:43.455 05:00:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:43.455 1+0 records in 00:11:43.455 1+0 records out 00:11:43.455 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000250914 s, 16.3 MB/s 00:11:43.455 05:00:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:43.455 05:00:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:11:43.455 05:00:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:43.455 05:00:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:43.455 05:00:54 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:11:43.455 05:00:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:43.455 05:00:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:43.455 05:00:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:11:43.455 05:00:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:11:43.455 05:00:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:11:48.738 65536+0 records in 00:11:48.738 65536+0 records out 00:11:48.738 33554432 bytes (34 MB, 32 MiB) copied, 4.83569 s, 6.9 MB/s 00:11:48.738 05:00:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:11:48.738 05:00:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:11:48.738 05:00:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:11:48.738 05:00:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:48.738 05:00:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:11:48.738 05:00:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:48.738 05:00:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:11:48.738 05:00:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:48.738 05:00:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:48.738 05:00:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:48.738 05:00:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:48.738 05:00:59 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:48.738 [2024-12-14 05:00:59.260205] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:48.738 05:00:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:48.738 05:00:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:11:48.738 05:00:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:11:48.738 05:00:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:11:48.738 05:00:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.738 05:00:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.738 [2024-12-14 05:00:59.268279] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:48.738 05:00:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.738 05:00:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:48.738 05:00:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:48.738 05:00:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:48.738 05:00:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:48.738 05:00:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:48.738 05:00:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:48.738 05:00:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:48.738 05:00:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:48.738 05:00:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:48.738 05:00:59 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:48.738 05:00:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:48.738 05:00:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.738 05:00:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.738 05:00:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:48.738 05:00:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.738 05:00:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:48.738 "name": "raid_bdev1", 00:11:48.738 "uuid": "d49f2e7f-ebed-4b1d-bb7f-95ecb669bda7", 00:11:48.738 "strip_size_kb": 0, 00:11:48.738 "state": "online", 00:11:48.738 "raid_level": "raid1", 00:11:48.738 "superblock": false, 00:11:48.738 "num_base_bdevs": 4, 00:11:48.738 "num_base_bdevs_discovered": 3, 00:11:48.738 "num_base_bdevs_operational": 3, 00:11:48.738 "base_bdevs_list": [ 00:11:48.738 { 00:11:48.738 "name": null, 00:11:48.738 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:48.738 "is_configured": false, 00:11:48.738 "data_offset": 0, 00:11:48.738 "data_size": 65536 00:11:48.738 }, 00:11:48.738 { 00:11:48.738 "name": "BaseBdev2", 00:11:48.738 "uuid": "5ca99e4a-b370-5e46-873e-50f0453c73f8", 00:11:48.738 "is_configured": true, 00:11:48.738 "data_offset": 0, 00:11:48.738 "data_size": 65536 00:11:48.738 }, 00:11:48.738 { 00:11:48.738 "name": "BaseBdev3", 00:11:48.738 "uuid": "ae1a4421-1c43-5e8c-8923-2556fbeb5429", 00:11:48.738 "is_configured": true, 00:11:48.738 "data_offset": 0, 00:11:48.738 "data_size": 65536 00:11:48.738 }, 00:11:48.738 { 00:11:48.738 "name": "BaseBdev4", 00:11:48.738 "uuid": "86cc8791-5a93-53b8-8650-2e9e1fab2e7e", 00:11:48.738 "is_configured": true, 00:11:48.738 "data_offset": 0, 00:11:48.738 "data_size": 65536 00:11:48.738 } 00:11:48.738 ] 
00:11:48.738 }' 00:11:48.738 05:00:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:48.738 05:00:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.998 05:00:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:11:48.998 05:00:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.998 05:00:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.998 [2024-12-14 05:00:59.703530] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:48.998 [2024-12-14 05:00:59.706954] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09bd0 00:11:48.998 [2024-12-14 05:00:59.708902] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:48.998 05:00:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.998 05:00:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:11:49.937 05:01:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:49.937 05:01:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:49.937 05:01:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:49.937 05:01:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:49.937 05:01:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:49.937 05:01:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:49.937 05:01:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:49.937 05:01:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:11:49.937 05:01:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.937 05:01:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.937 05:01:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:49.937 "name": "raid_bdev1", 00:11:49.937 "uuid": "d49f2e7f-ebed-4b1d-bb7f-95ecb669bda7", 00:11:49.937 "strip_size_kb": 0, 00:11:49.937 "state": "online", 00:11:49.937 "raid_level": "raid1", 00:11:49.937 "superblock": false, 00:11:49.937 "num_base_bdevs": 4, 00:11:49.937 "num_base_bdevs_discovered": 4, 00:11:49.937 "num_base_bdevs_operational": 4, 00:11:49.937 "process": { 00:11:49.937 "type": "rebuild", 00:11:49.937 "target": "spare", 00:11:49.937 "progress": { 00:11:49.937 "blocks": 20480, 00:11:49.937 "percent": 31 00:11:49.937 } 00:11:49.937 }, 00:11:49.937 "base_bdevs_list": [ 00:11:49.937 { 00:11:49.937 "name": "spare", 00:11:49.937 "uuid": "6bc77546-4df1-5f80-a80a-a4557cc0b785", 00:11:49.937 "is_configured": true, 00:11:49.937 "data_offset": 0, 00:11:49.937 "data_size": 65536 00:11:49.937 }, 00:11:49.937 { 00:11:49.937 "name": "BaseBdev2", 00:11:49.937 "uuid": "5ca99e4a-b370-5e46-873e-50f0453c73f8", 00:11:49.937 "is_configured": true, 00:11:49.937 "data_offset": 0, 00:11:49.937 "data_size": 65536 00:11:49.937 }, 00:11:49.937 { 00:11:49.937 "name": "BaseBdev3", 00:11:49.937 "uuid": "ae1a4421-1c43-5e8c-8923-2556fbeb5429", 00:11:49.937 "is_configured": true, 00:11:49.937 "data_offset": 0, 00:11:49.937 "data_size": 65536 00:11:49.937 }, 00:11:49.937 { 00:11:49.937 "name": "BaseBdev4", 00:11:49.937 "uuid": "86cc8791-5a93-53b8-8650-2e9e1fab2e7e", 00:11:49.937 "is_configured": true, 00:11:49.937 "data_offset": 0, 00:11:49.937 "data_size": 65536 00:11:49.937 } 00:11:49.937 ] 00:11:49.937 }' 00:11:49.937 05:01:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:49.937 05:01:00 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:49.937 05:01:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:50.197 05:01:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:50.197 05:01:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:11:50.197 05:01:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.197 05:01:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.197 [2024-12-14 05:01:00.867513] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:50.197 [2024-12-14 05:01:00.913567] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:11:50.197 [2024-12-14 05:01:00.913624] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:50.197 [2024-12-14 05:01:00.913641] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:50.197 [2024-12-14 05:01:00.913649] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:11:50.197 05:01:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.197 05:01:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:50.197 05:01:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:50.197 05:01:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:50.197 05:01:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:50.197 05:01:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:50.197 05:01:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 
00:11:50.197 05:01:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:50.197 05:01:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:50.197 05:01:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:50.197 05:01:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:50.197 05:01:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:50.197 05:01:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:50.197 05:01:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.197 05:01:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.197 05:01:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.197 05:01:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:50.197 "name": "raid_bdev1", 00:11:50.197 "uuid": "d49f2e7f-ebed-4b1d-bb7f-95ecb669bda7", 00:11:50.197 "strip_size_kb": 0, 00:11:50.197 "state": "online", 00:11:50.197 "raid_level": "raid1", 00:11:50.197 "superblock": false, 00:11:50.197 "num_base_bdevs": 4, 00:11:50.197 "num_base_bdevs_discovered": 3, 00:11:50.197 "num_base_bdevs_operational": 3, 00:11:50.197 "base_bdevs_list": [ 00:11:50.197 { 00:11:50.197 "name": null, 00:11:50.197 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:50.197 "is_configured": false, 00:11:50.197 "data_offset": 0, 00:11:50.197 "data_size": 65536 00:11:50.197 }, 00:11:50.197 { 00:11:50.197 "name": "BaseBdev2", 00:11:50.197 "uuid": "5ca99e4a-b370-5e46-873e-50f0453c73f8", 00:11:50.197 "is_configured": true, 00:11:50.197 "data_offset": 0, 00:11:50.197 "data_size": 65536 00:11:50.197 }, 00:11:50.197 { 00:11:50.197 "name": "BaseBdev3", 00:11:50.197 "uuid": "ae1a4421-1c43-5e8c-8923-2556fbeb5429", 00:11:50.197 
"is_configured": true, 00:11:50.197 "data_offset": 0, 00:11:50.197 "data_size": 65536 00:11:50.197 }, 00:11:50.197 { 00:11:50.197 "name": "BaseBdev4", 00:11:50.197 "uuid": "86cc8791-5a93-53b8-8650-2e9e1fab2e7e", 00:11:50.197 "is_configured": true, 00:11:50.197 "data_offset": 0, 00:11:50.197 "data_size": 65536 00:11:50.197 } 00:11:50.197 ] 00:11:50.197 }' 00:11:50.197 05:01:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:50.197 05:01:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.457 05:01:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:50.457 05:01:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:50.457 05:01:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:50.457 05:01:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:50.457 05:01:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:50.457 05:01:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:50.723 05:01:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:50.723 05:01:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.723 05:01:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.723 05:01:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.723 05:01:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:50.723 "name": "raid_bdev1", 00:11:50.723 "uuid": "d49f2e7f-ebed-4b1d-bb7f-95ecb669bda7", 00:11:50.723 "strip_size_kb": 0, 00:11:50.723 "state": "online", 00:11:50.723 "raid_level": "raid1", 00:11:50.723 "superblock": false, 00:11:50.723 "num_base_bdevs": 4, 00:11:50.723 
"num_base_bdevs_discovered": 3, 00:11:50.723 "num_base_bdevs_operational": 3, 00:11:50.723 "base_bdevs_list": [ 00:11:50.723 { 00:11:50.723 "name": null, 00:11:50.723 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:50.723 "is_configured": false, 00:11:50.723 "data_offset": 0, 00:11:50.723 "data_size": 65536 00:11:50.723 }, 00:11:50.723 { 00:11:50.723 "name": "BaseBdev2", 00:11:50.723 "uuid": "5ca99e4a-b370-5e46-873e-50f0453c73f8", 00:11:50.723 "is_configured": true, 00:11:50.723 "data_offset": 0, 00:11:50.723 "data_size": 65536 00:11:50.723 }, 00:11:50.723 { 00:11:50.723 "name": "BaseBdev3", 00:11:50.723 "uuid": "ae1a4421-1c43-5e8c-8923-2556fbeb5429", 00:11:50.723 "is_configured": true, 00:11:50.723 "data_offset": 0, 00:11:50.723 "data_size": 65536 00:11:50.723 }, 00:11:50.723 { 00:11:50.723 "name": "BaseBdev4", 00:11:50.723 "uuid": "86cc8791-5a93-53b8-8650-2e9e1fab2e7e", 00:11:50.723 "is_configured": true, 00:11:50.723 "data_offset": 0, 00:11:50.723 "data_size": 65536 00:11:50.723 } 00:11:50.723 ] 00:11:50.723 }' 00:11:50.723 05:01:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:50.723 05:01:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:50.723 05:01:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:50.723 05:01:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:50.723 05:01:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:11:50.723 05:01:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.723 05:01:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.723 [2024-12-14 05:01:01.464770] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:50.723 [2024-12-14 05:01:01.468119] bdev_raid.c: 265:raid_bdev_create_cb: 
*DEBUG*: raid_bdev_create_cb, 0x60d000d09ca0 00:11:50.723 [2024-12-14 05:01:01.469990] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:50.723 05:01:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.723 05:01:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:11:51.689 05:01:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:51.689 05:01:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:51.689 05:01:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:51.689 05:01:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:51.689 05:01:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:51.689 05:01:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:51.689 05:01:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:51.689 05:01:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.689 05:01:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.689 05:01:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.689 05:01:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:51.689 "name": "raid_bdev1", 00:11:51.689 "uuid": "d49f2e7f-ebed-4b1d-bb7f-95ecb669bda7", 00:11:51.689 "strip_size_kb": 0, 00:11:51.689 "state": "online", 00:11:51.689 "raid_level": "raid1", 00:11:51.689 "superblock": false, 00:11:51.689 "num_base_bdevs": 4, 00:11:51.689 "num_base_bdevs_discovered": 4, 00:11:51.689 "num_base_bdevs_operational": 4, 00:11:51.689 "process": { 00:11:51.689 "type": "rebuild", 00:11:51.689 "target": 
"spare", 00:11:51.689 "progress": { 00:11:51.689 "blocks": 20480, 00:11:51.689 "percent": 31 00:11:51.689 } 00:11:51.689 }, 00:11:51.689 "base_bdevs_list": [ 00:11:51.689 { 00:11:51.689 "name": "spare", 00:11:51.689 "uuid": "6bc77546-4df1-5f80-a80a-a4557cc0b785", 00:11:51.689 "is_configured": true, 00:11:51.689 "data_offset": 0, 00:11:51.689 "data_size": 65536 00:11:51.689 }, 00:11:51.689 { 00:11:51.689 "name": "BaseBdev2", 00:11:51.689 "uuid": "5ca99e4a-b370-5e46-873e-50f0453c73f8", 00:11:51.689 "is_configured": true, 00:11:51.689 "data_offset": 0, 00:11:51.689 "data_size": 65536 00:11:51.689 }, 00:11:51.689 { 00:11:51.689 "name": "BaseBdev3", 00:11:51.689 "uuid": "ae1a4421-1c43-5e8c-8923-2556fbeb5429", 00:11:51.689 "is_configured": true, 00:11:51.689 "data_offset": 0, 00:11:51.689 "data_size": 65536 00:11:51.689 }, 00:11:51.689 { 00:11:51.689 "name": "BaseBdev4", 00:11:51.689 "uuid": "86cc8791-5a93-53b8-8650-2e9e1fab2e7e", 00:11:51.689 "is_configured": true, 00:11:51.689 "data_offset": 0, 00:11:51.689 "data_size": 65536 00:11:51.689 } 00:11:51.689 ] 00:11:51.689 }' 00:11:51.689 05:01:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:51.949 05:01:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:51.949 05:01:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:51.949 05:01:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:51.949 05:01:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:11:51.949 05:01:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:11:51.949 05:01:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:11:51.949 05:01:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:11:51.949 05:01:02 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:51.949 05:01:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.949 05:01:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.949 [2024-12-14 05:01:02.628895] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:51.949 [2024-12-14 05:01:02.674177] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000d09ca0 00:11:51.949 05:01:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.949 05:01:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:11:51.949 05:01:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:11:51.949 05:01:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:51.949 05:01:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:51.949 05:01:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:51.949 05:01:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:51.949 05:01:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:51.949 05:01:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:51.949 05:01:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.949 05:01:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:51.949 05:01:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.949 05:01:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.949 05:01:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:11:51.949 "name": "raid_bdev1", 00:11:51.949 "uuid": "d49f2e7f-ebed-4b1d-bb7f-95ecb669bda7", 00:11:51.949 "strip_size_kb": 0, 00:11:51.949 "state": "online", 00:11:51.949 "raid_level": "raid1", 00:11:51.949 "superblock": false, 00:11:51.949 "num_base_bdevs": 4, 00:11:51.949 "num_base_bdevs_discovered": 3, 00:11:51.949 "num_base_bdevs_operational": 3, 00:11:51.949 "process": { 00:11:51.949 "type": "rebuild", 00:11:51.949 "target": "spare", 00:11:51.949 "progress": { 00:11:51.949 "blocks": 24576, 00:11:51.949 "percent": 37 00:11:51.949 } 00:11:51.949 }, 00:11:51.949 "base_bdevs_list": [ 00:11:51.949 { 00:11:51.949 "name": "spare", 00:11:51.949 "uuid": "6bc77546-4df1-5f80-a80a-a4557cc0b785", 00:11:51.949 "is_configured": true, 00:11:51.949 "data_offset": 0, 00:11:51.949 "data_size": 65536 00:11:51.949 }, 00:11:51.949 { 00:11:51.949 "name": null, 00:11:51.949 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:51.949 "is_configured": false, 00:11:51.949 "data_offset": 0, 00:11:51.949 "data_size": 65536 00:11:51.949 }, 00:11:51.949 { 00:11:51.949 "name": "BaseBdev3", 00:11:51.949 "uuid": "ae1a4421-1c43-5e8c-8923-2556fbeb5429", 00:11:51.949 "is_configured": true, 00:11:51.949 "data_offset": 0, 00:11:51.949 "data_size": 65536 00:11:51.949 }, 00:11:51.949 { 00:11:51.949 "name": "BaseBdev4", 00:11:51.949 "uuid": "86cc8791-5a93-53b8-8650-2e9e1fab2e7e", 00:11:51.949 "is_configured": true, 00:11:51.949 "data_offset": 0, 00:11:51.949 "data_size": 65536 00:11:51.949 } 00:11:51.949 ] 00:11:51.949 }' 00:11:51.949 05:01:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:51.949 05:01:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:51.949 05:01:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:51.949 05:01:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:51.949 05:01:02 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=353 00:11:51.949 05:01:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:51.949 05:01:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:51.949 05:01:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:51.949 05:01:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:51.949 05:01:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:51.949 05:01:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:51.949 05:01:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:51.949 05:01:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:51.949 05:01:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.949 05:01:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.209 05:01:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.209 05:01:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:52.209 "name": "raid_bdev1", 00:11:52.209 "uuid": "d49f2e7f-ebed-4b1d-bb7f-95ecb669bda7", 00:11:52.209 "strip_size_kb": 0, 00:11:52.209 "state": "online", 00:11:52.209 "raid_level": "raid1", 00:11:52.209 "superblock": false, 00:11:52.209 "num_base_bdevs": 4, 00:11:52.209 "num_base_bdevs_discovered": 3, 00:11:52.209 "num_base_bdevs_operational": 3, 00:11:52.209 "process": { 00:11:52.209 "type": "rebuild", 00:11:52.209 "target": "spare", 00:11:52.209 "progress": { 00:11:52.209 "blocks": 26624, 00:11:52.209 "percent": 40 00:11:52.209 } 00:11:52.209 }, 00:11:52.209 "base_bdevs_list": [ 00:11:52.209 { 00:11:52.209 "name": 
"spare", 00:11:52.209 "uuid": "6bc77546-4df1-5f80-a80a-a4557cc0b785", 00:11:52.209 "is_configured": true, 00:11:52.209 "data_offset": 0, 00:11:52.209 "data_size": 65536 00:11:52.209 }, 00:11:52.209 { 00:11:52.209 "name": null, 00:11:52.209 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:52.209 "is_configured": false, 00:11:52.209 "data_offset": 0, 00:11:52.209 "data_size": 65536 00:11:52.209 }, 00:11:52.209 { 00:11:52.209 "name": "BaseBdev3", 00:11:52.209 "uuid": "ae1a4421-1c43-5e8c-8923-2556fbeb5429", 00:11:52.209 "is_configured": true, 00:11:52.209 "data_offset": 0, 00:11:52.209 "data_size": 65536 00:11:52.209 }, 00:11:52.209 { 00:11:52.209 "name": "BaseBdev4", 00:11:52.209 "uuid": "86cc8791-5a93-53b8-8650-2e9e1fab2e7e", 00:11:52.209 "is_configured": true, 00:11:52.209 "data_offset": 0, 00:11:52.209 "data_size": 65536 00:11:52.209 } 00:11:52.209 ] 00:11:52.209 }' 00:11:52.209 05:01:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:52.209 05:01:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:52.209 05:01:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:52.210 05:01:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:52.210 05:01:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:53.147 05:01:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:53.147 05:01:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:53.147 05:01:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:53.147 05:01:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:53.147 05:01:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:53.147 05:01:03 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:53.147 05:01:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:53.147 05:01:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:53.147 05:01:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.147 05:01:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.147 05:01:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.147 05:01:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:53.147 "name": "raid_bdev1", 00:11:53.147 "uuid": "d49f2e7f-ebed-4b1d-bb7f-95ecb669bda7", 00:11:53.147 "strip_size_kb": 0, 00:11:53.147 "state": "online", 00:11:53.147 "raid_level": "raid1", 00:11:53.147 "superblock": false, 00:11:53.147 "num_base_bdevs": 4, 00:11:53.147 "num_base_bdevs_discovered": 3, 00:11:53.147 "num_base_bdevs_operational": 3, 00:11:53.147 "process": { 00:11:53.147 "type": "rebuild", 00:11:53.147 "target": "spare", 00:11:53.147 "progress": { 00:11:53.147 "blocks": 49152, 00:11:53.147 "percent": 75 00:11:53.147 } 00:11:53.147 }, 00:11:53.147 "base_bdevs_list": [ 00:11:53.147 { 00:11:53.147 "name": "spare", 00:11:53.147 "uuid": "6bc77546-4df1-5f80-a80a-a4557cc0b785", 00:11:53.147 "is_configured": true, 00:11:53.147 "data_offset": 0, 00:11:53.147 "data_size": 65536 00:11:53.147 }, 00:11:53.147 { 00:11:53.147 "name": null, 00:11:53.147 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:53.147 "is_configured": false, 00:11:53.147 "data_offset": 0, 00:11:53.147 "data_size": 65536 00:11:53.147 }, 00:11:53.147 { 00:11:53.147 "name": "BaseBdev3", 00:11:53.147 "uuid": "ae1a4421-1c43-5e8c-8923-2556fbeb5429", 00:11:53.147 "is_configured": true, 00:11:53.147 "data_offset": 0, 00:11:53.147 "data_size": 65536 00:11:53.147 }, 00:11:53.147 { 00:11:53.147 
"name": "BaseBdev4", 00:11:53.147 "uuid": "86cc8791-5a93-53b8-8650-2e9e1fab2e7e", 00:11:53.147 "is_configured": true, 00:11:53.147 "data_offset": 0, 00:11:53.147 "data_size": 65536 00:11:53.148 } 00:11:53.148 ] 00:11:53.148 }' 00:11:53.148 05:01:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:53.407 05:01:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:53.407 05:01:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:53.407 05:01:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:53.407 05:01:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:53.976 [2024-12-14 05:01:04.681173] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:11:53.976 [2024-12-14 05:01:04.681240] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:11:53.976 [2024-12-14 05:01:04.681281] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:54.235 05:01:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:54.235 05:01:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:54.235 05:01:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:54.235 05:01:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:54.235 05:01:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:54.235 05:01:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:54.235 05:01:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:54.235 05:01:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:11:54.235 05:01:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:54.235 05:01:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.497 05:01:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.497 05:01:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:54.497 "name": "raid_bdev1", 00:11:54.497 "uuid": "d49f2e7f-ebed-4b1d-bb7f-95ecb669bda7", 00:11:54.497 "strip_size_kb": 0, 00:11:54.497 "state": "online", 00:11:54.497 "raid_level": "raid1", 00:11:54.497 "superblock": false, 00:11:54.497 "num_base_bdevs": 4, 00:11:54.497 "num_base_bdevs_discovered": 3, 00:11:54.497 "num_base_bdevs_operational": 3, 00:11:54.497 "base_bdevs_list": [ 00:11:54.497 { 00:11:54.497 "name": "spare", 00:11:54.497 "uuid": "6bc77546-4df1-5f80-a80a-a4557cc0b785", 00:11:54.497 "is_configured": true, 00:11:54.497 "data_offset": 0, 00:11:54.497 "data_size": 65536 00:11:54.497 }, 00:11:54.497 { 00:11:54.497 "name": null, 00:11:54.497 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:54.497 "is_configured": false, 00:11:54.497 "data_offset": 0, 00:11:54.497 "data_size": 65536 00:11:54.497 }, 00:11:54.497 { 00:11:54.497 "name": "BaseBdev3", 00:11:54.497 "uuid": "ae1a4421-1c43-5e8c-8923-2556fbeb5429", 00:11:54.497 "is_configured": true, 00:11:54.497 "data_offset": 0, 00:11:54.497 "data_size": 65536 00:11:54.497 }, 00:11:54.497 { 00:11:54.497 "name": "BaseBdev4", 00:11:54.497 "uuid": "86cc8791-5a93-53b8-8650-2e9e1fab2e7e", 00:11:54.497 "is_configured": true, 00:11:54.497 "data_offset": 0, 00:11:54.497 "data_size": 65536 00:11:54.497 } 00:11:54.497 ] 00:11:54.497 }' 00:11:54.497 05:01:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:54.497 05:01:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:11:54.497 05:01:05 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:54.497 05:01:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:11:54.497 05:01:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:11:54.497 05:01:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:54.497 05:01:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:54.497 05:01:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:54.497 05:01:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:54.497 05:01:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:54.497 05:01:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:54.497 05:01:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.497 05:01:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.497 05:01:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:54.497 05:01:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.497 05:01:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:54.497 "name": "raid_bdev1", 00:11:54.497 "uuid": "d49f2e7f-ebed-4b1d-bb7f-95ecb669bda7", 00:11:54.497 "strip_size_kb": 0, 00:11:54.497 "state": "online", 00:11:54.497 "raid_level": "raid1", 00:11:54.497 "superblock": false, 00:11:54.497 "num_base_bdevs": 4, 00:11:54.497 "num_base_bdevs_discovered": 3, 00:11:54.497 "num_base_bdevs_operational": 3, 00:11:54.497 "base_bdevs_list": [ 00:11:54.497 { 00:11:54.497 "name": "spare", 00:11:54.497 "uuid": "6bc77546-4df1-5f80-a80a-a4557cc0b785", 00:11:54.498 "is_configured": true, 
00:11:54.498 "data_offset": 0, 00:11:54.498 "data_size": 65536 00:11:54.498 }, 00:11:54.498 { 00:11:54.498 "name": null, 00:11:54.498 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:54.498 "is_configured": false, 00:11:54.498 "data_offset": 0, 00:11:54.498 "data_size": 65536 00:11:54.498 }, 00:11:54.498 { 00:11:54.498 "name": "BaseBdev3", 00:11:54.498 "uuid": "ae1a4421-1c43-5e8c-8923-2556fbeb5429", 00:11:54.498 "is_configured": true, 00:11:54.498 "data_offset": 0, 00:11:54.498 "data_size": 65536 00:11:54.498 }, 00:11:54.498 { 00:11:54.498 "name": "BaseBdev4", 00:11:54.498 "uuid": "86cc8791-5a93-53b8-8650-2e9e1fab2e7e", 00:11:54.498 "is_configured": true, 00:11:54.498 "data_offset": 0, 00:11:54.498 "data_size": 65536 00:11:54.498 } 00:11:54.498 ] 00:11:54.498 }' 00:11:54.498 05:01:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:54.498 05:01:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:54.498 05:01:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:54.757 05:01:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:54.757 05:01:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:54.757 05:01:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:54.757 05:01:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:54.757 05:01:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:54.757 05:01:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:54.757 05:01:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:54.757 05:01:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:54.757 
05:01:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:54.757 05:01:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:54.757 05:01:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:54.757 05:01:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:54.757 05:01:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:54.757 05:01:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.757 05:01:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.757 05:01:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.757 05:01:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:54.757 "name": "raid_bdev1", 00:11:54.757 "uuid": "d49f2e7f-ebed-4b1d-bb7f-95ecb669bda7", 00:11:54.757 "strip_size_kb": 0, 00:11:54.757 "state": "online", 00:11:54.757 "raid_level": "raid1", 00:11:54.757 "superblock": false, 00:11:54.757 "num_base_bdevs": 4, 00:11:54.757 "num_base_bdevs_discovered": 3, 00:11:54.757 "num_base_bdevs_operational": 3, 00:11:54.757 "base_bdevs_list": [ 00:11:54.757 { 00:11:54.757 "name": "spare", 00:11:54.757 "uuid": "6bc77546-4df1-5f80-a80a-a4557cc0b785", 00:11:54.757 "is_configured": true, 00:11:54.757 "data_offset": 0, 00:11:54.757 "data_size": 65536 00:11:54.757 }, 00:11:54.757 { 00:11:54.757 "name": null, 00:11:54.757 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:54.757 "is_configured": false, 00:11:54.757 "data_offset": 0, 00:11:54.757 "data_size": 65536 00:11:54.757 }, 00:11:54.757 { 00:11:54.757 "name": "BaseBdev3", 00:11:54.757 "uuid": "ae1a4421-1c43-5e8c-8923-2556fbeb5429", 00:11:54.757 "is_configured": true, 00:11:54.757 "data_offset": 0, 00:11:54.757 "data_size": 65536 00:11:54.757 }, 00:11:54.757 { 
00:11:54.757 "name": "BaseBdev4", 00:11:54.757 "uuid": "86cc8791-5a93-53b8-8650-2e9e1fab2e7e", 00:11:54.757 "is_configured": true, 00:11:54.757 "data_offset": 0, 00:11:54.757 "data_size": 65536 00:11:54.757 } 00:11:54.757 ] 00:11:54.757 }' 00:11:54.757 05:01:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:54.757 05:01:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.016 05:01:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:55.016 05:01:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.016 05:01:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.016 [2024-12-14 05:01:05.787005] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:55.016 [2024-12-14 05:01:05.787075] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:55.016 [2024-12-14 05:01:05.787229] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:55.016 [2024-12-14 05:01:05.787356] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:55.016 [2024-12-14 05:01:05.787431] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:11:55.016 05:01:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.017 05:01:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:55.017 05:01:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.017 05:01:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:11:55.017 05:01:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.017 05:01:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:11:55.017 05:01:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:11:55.017 05:01:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:11:55.017 05:01:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:11:55.017 05:01:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:11:55.017 05:01:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:11:55.017 05:01:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:11:55.017 05:01:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:55.017 05:01:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:55.017 05:01:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:55.017 05:01:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:11:55.017 05:01:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:55.017 05:01:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:55.017 05:01:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:11:55.276 /dev/nbd0 00:11:55.276 05:01:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:55.276 05:01:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:55.277 05:01:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:11:55.277 05:01:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:11:55.277 05:01:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:55.277 05:01:06 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:55.277 05:01:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:11:55.277 05:01:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:11:55.277 05:01:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:55.277 05:01:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:55.277 05:01:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:55.277 1+0 records in 00:11:55.277 1+0 records out 00:11:55.277 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000509126 s, 8.0 MB/s 00:11:55.277 05:01:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:55.277 05:01:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:11:55.277 05:01:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:55.277 05:01:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:55.277 05:01:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:11:55.277 05:01:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:55.277 05:01:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:55.277 05:01:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:11:55.536 /dev/nbd1 00:11:55.536 05:01:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:11:55.536 05:01:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:11:55.536 05:01:06 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:11:55.536 05:01:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:11:55.536 05:01:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:55.536 05:01:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:55.536 05:01:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:11:55.536 05:01:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:11:55.536 05:01:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:55.536 05:01:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:55.536 05:01:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:55.536 1+0 records in 00:11:55.536 1+0 records out 00:11:55.536 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000386485 s, 10.6 MB/s 00:11:55.536 05:01:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:55.536 05:01:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:11:55.536 05:01:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:55.536 05:01:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:55.536 05:01:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:11:55.536 05:01:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:55.536 05:01:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:55.536 05:01:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 
00:11:55.536 05:01:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:11:55.536 05:01:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:11:55.536 05:01:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:55.536 05:01:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:55.536 05:01:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:11:55.536 05:01:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:55.536 05:01:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:11:55.795 05:01:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:55.795 05:01:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:55.795 05:01:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:55.795 05:01:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:55.795 05:01:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:55.795 05:01:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:55.795 05:01:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:11:55.795 05:01:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:11:55.795 05:01:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:55.795 05:01:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:11:56.055 05:01:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:56.055 05:01:06 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:56.055 05:01:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:56.055 05:01:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:56.055 05:01:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:56.055 05:01:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:56.055 05:01:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:11:56.055 05:01:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:11:56.055 05:01:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:11:56.055 05:01:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 88169 00:11:56.055 05:01:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@950 -- # '[' -z 88169 ']' 00:11:56.055 05:01:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # kill -0 88169 00:11:56.055 05:01:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@955 -- # uname 00:11:56.055 05:01:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:56.055 05:01:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 88169 00:11:56.055 05:01:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:56.055 05:01:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:56.055 05:01:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 88169' 00:11:56.055 killing process with pid 88169 00:11:56.055 05:01:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@969 -- # kill 88169 00:11:56.055 Received shutdown signal, test time was about 60.000000 seconds 00:11:56.055 00:11:56.055 Latency(us) 
00:11:56.055 [2024-12-14T05:01:06.938Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:56.055 [2024-12-14T05:01:06.938Z] =================================================================================================================== 00:11:56.055 [2024-12-14T05:01:06.938Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:11:56.055 [2024-12-14 05:01:06.836760] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:56.055 05:01:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@974 -- # wait 88169 00:11:56.055 [2024-12-14 05:01:06.887186] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:56.316 05:01:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:11:56.316 00:11:56.316 real 0m14.806s 00:11:56.316 user 0m17.223s 00:11:56.316 sys 0m2.775s 00:11:56.316 05:01:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:56.316 05:01:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.316 ************************************ 00:11:56.316 END TEST raid_rebuild_test 00:11:56.316 ************************************ 00:11:56.316 05:01:07 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 4 true false true 00:11:56.316 05:01:07 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:11:56.316 05:01:07 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:56.316 05:01:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:56.576 ************************************ 00:11:56.576 START TEST raid_rebuild_test_sb 00:11:56.576 ************************************ 00:11:56.576 05:01:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 4 true false true 00:11:56.576 05:01:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:11:56.576 05:01:07 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:11:56.576 05:01:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:11:56.576 05:01:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:11:56.576 05:01:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:11:56.576 05:01:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:11:56.576 05:01:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:56.576 05:01:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:11:56.576 05:01:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:56.576 05:01:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:56.576 05:01:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:11:56.576 05:01:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:56.576 05:01:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:56.576 05:01:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:11:56.576 05:01:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:56.576 05:01:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:56.576 05:01:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:11:56.576 05:01:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:56.576 05:01:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:56.576 05:01:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:56.576 05:01:07 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:11:56.576 05:01:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:11:56.576 05:01:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:11:56.576 05:01:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:11:56.576 05:01:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:11:56.576 05:01:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:11:56.576 05:01:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:11:56.576 05:01:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:11:56.576 05:01:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:11:56.576 05:01:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:11:56.576 05:01:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=88594 00:11:56.576 05:01:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:11:56.576 05:01:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 88594 00:11:56.576 05:01:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@831 -- # '[' -z 88594 ']' 00:11:56.576 05:01:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:56.576 05:01:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:56.576 05:01:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:11:56.576 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:56.576 05:01:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:56.576 05:01:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:56.576 [2024-12-14 05:01:07.304472] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:11:56.576 [2024-12-14 05:01:07.304677] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:11:56.576 Zero copy mechanism will not be used. 00:11:56.576 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88594 ] 00:11:56.836 [2024-12-14 05:01:07.465283] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:56.836 [2024-12-14 05:01:07.512292] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:56.836 [2024-12-14 05:01:07.555147] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:56.836 [2024-12-14 05:01:07.555310] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:57.406 05:01:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:57.406 05:01:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # return 0 00:11:57.406 05:01:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:57.406 05:01:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:57.406 05:01:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.406 05:01:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.406 BaseBdev1_malloc 
00:11:57.406 05:01:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.406 05:01:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:11:57.406 05:01:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.406 05:01:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.406 [2024-12-14 05:01:08.154117] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:11:57.406 [2024-12-14 05:01:08.154264] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:57.406 [2024-12-14 05:01:08.154311] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:57.406 [2024-12-14 05:01:08.154345] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:57.406 [2024-12-14 05:01:08.156486] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:57.406 [2024-12-14 05:01:08.156556] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:57.406 BaseBdev1 00:11:57.406 05:01:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.406 05:01:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:57.406 05:01:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:57.406 05:01:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.406 05:01:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.406 BaseBdev2_malloc 00:11:57.406 05:01:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.406 05:01:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd 
bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:11:57.406 05:01:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.406 05:01:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.406 [2024-12-14 05:01:08.200224] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:11:57.406 [2024-12-14 05:01:08.200424] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:57.406 [2024-12-14 05:01:08.200544] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:57.406 [2024-12-14 05:01:08.200652] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:57.406 [2024-12-14 05:01:08.204867] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:57.406 [2024-12-14 05:01:08.204919] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:57.406 BaseBdev2 00:11:57.406 05:01:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.406 05:01:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:57.406 05:01:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:57.406 05:01:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.406 05:01:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.406 BaseBdev3_malloc 00:11:57.406 05:01:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.406 05:01:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:11:57.406 05:01:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.406 05:01:08 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.406 [2024-12-14 05:01:08.230974] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:11:57.406 [2024-12-14 05:01:08.231021] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:57.406 [2024-12-14 05:01:08.231059] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:11:57.406 [2024-12-14 05:01:08.231068] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:57.406 [2024-12-14 05:01:08.233147] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:57.406 [2024-12-14 05:01:08.233192] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:57.406 BaseBdev3 00:11:57.406 05:01:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.406 05:01:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:57.406 05:01:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:11:57.406 05:01:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.406 05:01:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.406 BaseBdev4_malloc 00:11:57.406 05:01:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.406 05:01:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:11:57.406 05:01:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.406 05:01:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.406 [2024-12-14 05:01:08.259714] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev4_malloc 00:11:57.406 [2024-12-14 05:01:08.259812] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:57.406 [2024-12-14 05:01:08.259870] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:11:57.406 [2024-12-14 05:01:08.259906] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:57.406 [2024-12-14 05:01:08.261946] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:57.406 [2024-12-14 05:01:08.262014] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:57.406 BaseBdev4 00:11:57.406 05:01:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.406 05:01:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:11:57.406 05:01:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.406 05:01:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.406 spare_malloc 00:11:57.406 05:01:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.406 05:01:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:11:57.406 05:01:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.406 05:01:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.666 spare_delay 00:11:57.666 05:01:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.666 05:01:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:11:57.666 05:01:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.666 05:01:08 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.666 [2024-12-14 05:01:08.300295] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:11:57.666 [2024-12-14 05:01:08.300403] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:57.666 [2024-12-14 05:01:08.300442] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:11:57.666 [2024-12-14 05:01:08.300471] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:57.666 [2024-12-14 05:01:08.302518] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:57.666 [2024-12-14 05:01:08.302583] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:11:57.666 spare 00:11:57.666 05:01:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.666 05:01:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:11:57.666 05:01:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.666 05:01:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.666 [2024-12-14 05:01:08.312364] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:57.666 [2024-12-14 05:01:08.314233] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:57.666 [2024-12-14 05:01:08.314354] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:57.666 [2024-12-14 05:01:08.314430] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:57.666 [2024-12-14 05:01:08.314661] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:11:57.666 [2024-12-14 05:01:08.314711] 
bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:57.666 [2024-12-14 05:01:08.314969] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:11:57.666 [2024-12-14 05:01:08.315172] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:11:57.666 [2024-12-14 05:01:08.315233] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:11:57.666 [2024-12-14 05:01:08.315419] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:57.666 05:01:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.666 05:01:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:11:57.666 05:01:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:57.666 05:01:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:57.666 05:01:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:57.666 05:01:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:57.666 05:01:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:57.666 05:01:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:57.666 05:01:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:57.666 05:01:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:57.666 05:01:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:57.666 05:01:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:57.666 05:01:08 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:57.666 05:01:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.666 05:01:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.666 05:01:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.666 05:01:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:57.666 "name": "raid_bdev1", 00:11:57.666 "uuid": "d92207d0-5714-4b05-b3c6-04d8442a1835", 00:11:57.666 "strip_size_kb": 0, 00:11:57.666 "state": "online", 00:11:57.666 "raid_level": "raid1", 00:11:57.666 "superblock": true, 00:11:57.666 "num_base_bdevs": 4, 00:11:57.666 "num_base_bdevs_discovered": 4, 00:11:57.666 "num_base_bdevs_operational": 4, 00:11:57.666 "base_bdevs_list": [ 00:11:57.666 { 00:11:57.666 "name": "BaseBdev1", 00:11:57.666 "uuid": "ff7ebbbf-185b-59c2-99e5-80f1a9d132e9", 00:11:57.666 "is_configured": true, 00:11:57.666 "data_offset": 2048, 00:11:57.666 "data_size": 63488 00:11:57.666 }, 00:11:57.666 { 00:11:57.666 "name": "BaseBdev2", 00:11:57.666 "uuid": "bbddab58-0a6f-5817-8f2c-c111bf368a08", 00:11:57.666 "is_configured": true, 00:11:57.666 "data_offset": 2048, 00:11:57.666 "data_size": 63488 00:11:57.666 }, 00:11:57.666 { 00:11:57.666 "name": "BaseBdev3", 00:11:57.666 "uuid": "c85b4fcf-1ad0-5118-94ca-342a8f79daaa", 00:11:57.666 "is_configured": true, 00:11:57.666 "data_offset": 2048, 00:11:57.666 "data_size": 63488 00:11:57.666 }, 00:11:57.666 { 00:11:57.666 "name": "BaseBdev4", 00:11:57.666 "uuid": "05e8fe10-7389-54fc-beb8-805b30d40725", 00:11:57.666 "is_configured": true, 00:11:57.666 "data_offset": 2048, 00:11:57.666 "data_size": 63488 00:11:57.666 } 00:11:57.666 ] 00:11:57.666 }' 00:11:57.666 05:01:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:57.666 05:01:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:11:57.925 05:01:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:57.925 05:01:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.925 05:01:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.925 05:01:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:11:57.925 [2024-12-14 05:01:08.771854] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:57.926 05:01:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.186 05:01:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:11:58.186 05:01:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:11:58.186 05:01:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:58.186 05:01:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.186 05:01:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.186 05:01:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.186 05:01:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:11:58.186 05:01:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:11:58.186 05:01:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:11:58.186 05:01:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:11:58.186 05:01:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:11:58.186 05:01:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:11:58.186 05:01:08 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:11:58.186 05:01:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:58.186 05:01:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:11:58.186 05:01:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:58.186 05:01:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:11:58.186 05:01:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:58.186 05:01:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:58.186 05:01:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:11:58.186 [2024-12-14 05:01:09.047304] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:11:58.445 /dev/nbd0 00:11:58.445 05:01:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:58.445 05:01:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:58.445 05:01:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:11:58.445 05:01:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:11:58.445 05:01:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:58.445 05:01:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:58.445 05:01:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:11:58.445 05:01:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:11:58.445 05:01:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:58.445 05:01:09 bdev_raid.raid_rebuild_test_sb 
-- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:58.445 05:01:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:58.445 1+0 records in 00:11:58.445 1+0 records out 00:11:58.445 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00055132 s, 7.4 MB/s 00:11:58.445 05:01:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:58.445 05:01:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:11:58.445 05:01:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:58.445 05:01:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:58.445 05:01:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:11:58.445 05:01:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:58.446 05:01:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:58.446 05:01:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:11:58.446 05:01:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:11:58.446 05:01:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:12:03.719 63488+0 records in 00:12:03.719 63488+0 records out 00:12:03.719 32505856 bytes (33 MB, 31 MiB) copied, 5.33201 s, 6.1 MB/s 00:12:03.719 05:01:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:12:03.719 05:01:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:03.719 05:01:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0') 00:12:03.719 05:01:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:03.719 05:01:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:12:03.719 05:01:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:03.719 05:01:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:03.979 [2024-12-14 05:01:14.647109] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:03.979 05:01:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:03.979 05:01:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:03.979 05:01:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:03.979 05:01:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:03.979 05:01:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:03.979 05:01:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:03.979 05:01:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:12:03.979 05:01:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:12:03.979 05:01:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:12:03.979 05:01:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.979 05:01:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:03.979 [2024-12-14 05:01:14.683070] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:03.979 05:01:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.979 05:01:14 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:03.979 05:01:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:03.979 05:01:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:03.979 05:01:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:03.979 05:01:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:03.979 05:01:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:03.979 05:01:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:03.979 05:01:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:03.979 05:01:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:03.979 05:01:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:03.979 05:01:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:03.979 05:01:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.979 05:01:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:03.979 05:01:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:03.979 05:01:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.979 05:01:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:03.979 "name": "raid_bdev1", 00:12:03.979 "uuid": "d92207d0-5714-4b05-b3c6-04d8442a1835", 00:12:03.979 "strip_size_kb": 0, 00:12:03.979 "state": "online", 00:12:03.979 "raid_level": "raid1", 00:12:03.979 "superblock": true, 00:12:03.979 "num_base_bdevs": 4, 00:12:03.979 
"num_base_bdevs_discovered": 3, 00:12:03.979 "num_base_bdevs_operational": 3, 00:12:03.979 "base_bdevs_list": [ 00:12:03.979 { 00:12:03.979 "name": null, 00:12:03.979 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:03.979 "is_configured": false, 00:12:03.979 "data_offset": 0, 00:12:03.979 "data_size": 63488 00:12:03.979 }, 00:12:03.979 { 00:12:03.979 "name": "BaseBdev2", 00:12:03.979 "uuid": "bbddab58-0a6f-5817-8f2c-c111bf368a08", 00:12:03.979 "is_configured": true, 00:12:03.979 "data_offset": 2048, 00:12:03.979 "data_size": 63488 00:12:03.979 }, 00:12:03.979 { 00:12:03.979 "name": "BaseBdev3", 00:12:03.979 "uuid": "c85b4fcf-1ad0-5118-94ca-342a8f79daaa", 00:12:03.979 "is_configured": true, 00:12:03.979 "data_offset": 2048, 00:12:03.979 "data_size": 63488 00:12:03.979 }, 00:12:03.979 { 00:12:03.979 "name": "BaseBdev4", 00:12:03.979 "uuid": "05e8fe10-7389-54fc-beb8-805b30d40725", 00:12:03.979 "is_configured": true, 00:12:03.979 "data_offset": 2048, 00:12:03.979 "data_size": 63488 00:12:03.979 } 00:12:03.979 ] 00:12:03.979 }' 00:12:03.979 05:01:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:03.979 05:01:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:04.239 05:01:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:04.239 05:01:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.239 05:01:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:04.239 [2024-12-14 05:01:15.114333] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:04.239 [2024-12-14 05:01:15.117728] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3360 00:12:04.239 [2024-12-14 05:01:15.119741] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:04.497 05:01:15 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.497 05:01:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:12:05.436 05:01:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:05.436 05:01:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:05.436 05:01:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:05.436 05:01:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:05.436 05:01:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:05.436 05:01:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:05.436 05:01:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:05.436 05:01:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.436 05:01:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:05.436 05:01:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.436 05:01:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:05.436 "name": "raid_bdev1", 00:12:05.436 "uuid": "d92207d0-5714-4b05-b3c6-04d8442a1835", 00:12:05.436 "strip_size_kb": 0, 00:12:05.436 "state": "online", 00:12:05.436 "raid_level": "raid1", 00:12:05.436 "superblock": true, 00:12:05.436 "num_base_bdevs": 4, 00:12:05.436 "num_base_bdevs_discovered": 4, 00:12:05.436 "num_base_bdevs_operational": 4, 00:12:05.436 "process": { 00:12:05.436 "type": "rebuild", 00:12:05.436 "target": "spare", 00:12:05.436 "progress": { 00:12:05.436 "blocks": 20480, 00:12:05.436 "percent": 32 00:12:05.436 } 00:12:05.436 }, 00:12:05.436 "base_bdevs_list": [ 00:12:05.436 { 
00:12:05.436 "name": "spare", 00:12:05.436 "uuid": "5c745e62-bee6-576a-8805-43110bfac419", 00:12:05.436 "is_configured": true, 00:12:05.436 "data_offset": 2048, 00:12:05.436 "data_size": 63488 00:12:05.436 }, 00:12:05.436 { 00:12:05.436 "name": "BaseBdev2", 00:12:05.436 "uuid": "bbddab58-0a6f-5817-8f2c-c111bf368a08", 00:12:05.436 "is_configured": true, 00:12:05.436 "data_offset": 2048, 00:12:05.436 "data_size": 63488 00:12:05.436 }, 00:12:05.436 { 00:12:05.436 "name": "BaseBdev3", 00:12:05.436 "uuid": "c85b4fcf-1ad0-5118-94ca-342a8f79daaa", 00:12:05.436 "is_configured": true, 00:12:05.436 "data_offset": 2048, 00:12:05.436 "data_size": 63488 00:12:05.436 }, 00:12:05.436 { 00:12:05.436 "name": "BaseBdev4", 00:12:05.436 "uuid": "05e8fe10-7389-54fc-beb8-805b30d40725", 00:12:05.436 "is_configured": true, 00:12:05.436 "data_offset": 2048, 00:12:05.436 "data_size": 63488 00:12:05.436 } 00:12:05.437 ] 00:12:05.437 }' 00:12:05.437 05:01:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:05.437 05:01:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:05.437 05:01:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:05.437 05:01:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:05.437 05:01:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:05.437 05:01:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.437 05:01:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:05.437 [2024-12-14 05:01:16.278599] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:05.697 [2024-12-14 05:01:16.324604] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:05.697 [2024-12-14 
05:01:16.324657] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:05.697 [2024-12-14 05:01:16.324678] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:05.697 [2024-12-14 05:01:16.324685] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:05.697 05:01:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.697 05:01:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:05.697 05:01:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:05.697 05:01:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:05.697 05:01:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:05.697 05:01:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:05.697 05:01:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:05.697 05:01:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:05.697 05:01:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:05.697 05:01:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:05.697 05:01:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:05.697 05:01:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:05.697 05:01:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.697 05:01:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:05.697 05:01:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:12:05.697 05:01:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.697 05:01:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:05.697 "name": "raid_bdev1", 00:12:05.697 "uuid": "d92207d0-5714-4b05-b3c6-04d8442a1835", 00:12:05.697 "strip_size_kb": 0, 00:12:05.697 "state": "online", 00:12:05.697 "raid_level": "raid1", 00:12:05.697 "superblock": true, 00:12:05.697 "num_base_bdevs": 4, 00:12:05.697 "num_base_bdevs_discovered": 3, 00:12:05.697 "num_base_bdevs_operational": 3, 00:12:05.697 "base_bdevs_list": [ 00:12:05.697 { 00:12:05.697 "name": null, 00:12:05.697 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:05.697 "is_configured": false, 00:12:05.697 "data_offset": 0, 00:12:05.697 "data_size": 63488 00:12:05.697 }, 00:12:05.697 { 00:12:05.697 "name": "BaseBdev2", 00:12:05.697 "uuid": "bbddab58-0a6f-5817-8f2c-c111bf368a08", 00:12:05.697 "is_configured": true, 00:12:05.697 "data_offset": 2048, 00:12:05.697 "data_size": 63488 00:12:05.697 }, 00:12:05.697 { 00:12:05.697 "name": "BaseBdev3", 00:12:05.697 "uuid": "c85b4fcf-1ad0-5118-94ca-342a8f79daaa", 00:12:05.697 "is_configured": true, 00:12:05.697 "data_offset": 2048, 00:12:05.697 "data_size": 63488 00:12:05.697 }, 00:12:05.697 { 00:12:05.697 "name": "BaseBdev4", 00:12:05.697 "uuid": "05e8fe10-7389-54fc-beb8-805b30d40725", 00:12:05.697 "is_configured": true, 00:12:05.697 "data_offset": 2048, 00:12:05.697 "data_size": 63488 00:12:05.697 } 00:12:05.697 ] 00:12:05.697 }' 00:12:05.697 05:01:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:05.697 05:01:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:05.957 05:01:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:05.957 05:01:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:05.957 05:01:16 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:05.957 05:01:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:05.957 05:01:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:05.957 05:01:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:05.957 05:01:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.957 05:01:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:05.957 05:01:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:05.957 05:01:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.957 05:01:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:05.957 "name": "raid_bdev1", 00:12:05.957 "uuid": "d92207d0-5714-4b05-b3c6-04d8442a1835", 00:12:05.957 "strip_size_kb": 0, 00:12:05.957 "state": "online", 00:12:05.957 "raid_level": "raid1", 00:12:05.957 "superblock": true, 00:12:05.957 "num_base_bdevs": 4, 00:12:05.957 "num_base_bdevs_discovered": 3, 00:12:05.957 "num_base_bdevs_operational": 3, 00:12:05.957 "base_bdevs_list": [ 00:12:05.957 { 00:12:05.957 "name": null, 00:12:05.957 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:05.957 "is_configured": false, 00:12:05.957 "data_offset": 0, 00:12:05.957 "data_size": 63488 00:12:05.957 }, 00:12:05.957 { 00:12:05.957 "name": "BaseBdev2", 00:12:05.957 "uuid": "bbddab58-0a6f-5817-8f2c-c111bf368a08", 00:12:05.957 "is_configured": true, 00:12:05.957 "data_offset": 2048, 00:12:05.957 "data_size": 63488 00:12:05.957 }, 00:12:05.957 { 00:12:05.957 "name": "BaseBdev3", 00:12:05.957 "uuid": "c85b4fcf-1ad0-5118-94ca-342a8f79daaa", 00:12:05.957 "is_configured": true, 00:12:05.957 "data_offset": 2048, 00:12:05.957 "data_size": 63488 
00:12:05.957 }, 00:12:05.957 { 00:12:05.957 "name": "BaseBdev4", 00:12:05.957 "uuid": "05e8fe10-7389-54fc-beb8-805b30d40725", 00:12:05.957 "is_configured": true, 00:12:05.957 "data_offset": 2048, 00:12:05.957 "data_size": 63488 00:12:05.957 } 00:12:05.957 ] 00:12:05.957 }' 00:12:05.957 05:01:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:06.217 05:01:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:06.217 05:01:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:06.217 05:01:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:06.217 05:01:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:06.217 05:01:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.217 05:01:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:06.217 [2024-12-14 05:01:16.931815] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:06.217 [2024-12-14 05:01:16.935246] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3430 00:12:06.217 [2024-12-14 05:01:16.937277] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:06.217 05:01:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.217 05:01:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:12:07.157 05:01:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:07.157 05:01:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:07.157 05:01:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 
00:12:07.157 05:01:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:07.157 05:01:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:07.157 05:01:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:07.157 05:01:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:07.157 05:01:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.157 05:01:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.157 05:01:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.157 05:01:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:07.157 "name": "raid_bdev1", 00:12:07.157 "uuid": "d92207d0-5714-4b05-b3c6-04d8442a1835", 00:12:07.157 "strip_size_kb": 0, 00:12:07.157 "state": "online", 00:12:07.157 "raid_level": "raid1", 00:12:07.157 "superblock": true, 00:12:07.157 "num_base_bdevs": 4, 00:12:07.157 "num_base_bdevs_discovered": 4, 00:12:07.157 "num_base_bdevs_operational": 4, 00:12:07.157 "process": { 00:12:07.157 "type": "rebuild", 00:12:07.157 "target": "spare", 00:12:07.157 "progress": { 00:12:07.157 "blocks": 20480, 00:12:07.157 "percent": 32 00:12:07.157 } 00:12:07.157 }, 00:12:07.157 "base_bdevs_list": [ 00:12:07.157 { 00:12:07.157 "name": "spare", 00:12:07.157 "uuid": "5c745e62-bee6-576a-8805-43110bfac419", 00:12:07.157 "is_configured": true, 00:12:07.157 "data_offset": 2048, 00:12:07.157 "data_size": 63488 00:12:07.157 }, 00:12:07.157 { 00:12:07.157 "name": "BaseBdev2", 00:12:07.157 "uuid": "bbddab58-0a6f-5817-8f2c-c111bf368a08", 00:12:07.157 "is_configured": true, 00:12:07.157 "data_offset": 2048, 00:12:07.157 "data_size": 63488 00:12:07.157 }, 00:12:07.157 { 00:12:07.157 "name": "BaseBdev3", 00:12:07.157 "uuid": 
"c85b4fcf-1ad0-5118-94ca-342a8f79daaa", 00:12:07.157 "is_configured": true, 00:12:07.157 "data_offset": 2048, 00:12:07.157 "data_size": 63488 00:12:07.157 }, 00:12:07.157 { 00:12:07.157 "name": "BaseBdev4", 00:12:07.157 "uuid": "05e8fe10-7389-54fc-beb8-805b30d40725", 00:12:07.157 "is_configured": true, 00:12:07.157 "data_offset": 2048, 00:12:07.157 "data_size": 63488 00:12:07.157 } 00:12:07.157 ] 00:12:07.157 }' 00:12:07.157 05:01:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:07.416 05:01:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:07.416 05:01:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:07.416 05:01:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:07.416 05:01:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:12:07.416 05:01:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:12:07.416 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:12:07.416 05:01:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:12:07.416 05:01:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:12:07.416 05:01:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:12:07.416 05:01:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:07.416 05:01:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.416 05:01:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.416 [2024-12-14 05:01:18.100048] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:07.416 [2024-12-14 05:01:18.241121] 
bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000ca3430 00:12:07.416 05:01:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.416 05:01:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:12:07.416 05:01:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:12:07.416 05:01:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:07.416 05:01:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:07.416 05:01:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:07.416 05:01:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:07.416 05:01:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:07.416 05:01:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:07.416 05:01:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:07.416 05:01:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.416 05:01:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.416 05:01:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.416 05:01:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:07.416 "name": "raid_bdev1", 00:12:07.416 "uuid": "d92207d0-5714-4b05-b3c6-04d8442a1835", 00:12:07.416 "strip_size_kb": 0, 00:12:07.416 "state": "online", 00:12:07.416 "raid_level": "raid1", 00:12:07.416 "superblock": true, 00:12:07.416 "num_base_bdevs": 4, 00:12:07.416 "num_base_bdevs_discovered": 3, 00:12:07.416 "num_base_bdevs_operational": 3, 00:12:07.416 
"process": { 00:12:07.416 "type": "rebuild", 00:12:07.416 "target": "spare", 00:12:07.416 "progress": { 00:12:07.416 "blocks": 24576, 00:12:07.416 "percent": 38 00:12:07.416 } 00:12:07.416 }, 00:12:07.416 "base_bdevs_list": [ 00:12:07.416 { 00:12:07.416 "name": "spare", 00:12:07.416 "uuid": "5c745e62-bee6-576a-8805-43110bfac419", 00:12:07.416 "is_configured": true, 00:12:07.416 "data_offset": 2048, 00:12:07.416 "data_size": 63488 00:12:07.416 }, 00:12:07.416 { 00:12:07.416 "name": null, 00:12:07.416 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:07.416 "is_configured": false, 00:12:07.416 "data_offset": 0, 00:12:07.416 "data_size": 63488 00:12:07.416 }, 00:12:07.416 { 00:12:07.416 "name": "BaseBdev3", 00:12:07.416 "uuid": "c85b4fcf-1ad0-5118-94ca-342a8f79daaa", 00:12:07.416 "is_configured": true, 00:12:07.416 "data_offset": 2048, 00:12:07.416 "data_size": 63488 00:12:07.416 }, 00:12:07.416 { 00:12:07.416 "name": "BaseBdev4", 00:12:07.416 "uuid": "05e8fe10-7389-54fc-beb8-805b30d40725", 00:12:07.416 "is_configured": true, 00:12:07.416 "data_offset": 2048, 00:12:07.416 "data_size": 63488 00:12:07.416 } 00:12:07.416 ] 00:12:07.416 }' 00:12:07.676 05:01:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:07.676 05:01:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:07.676 05:01:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:07.676 05:01:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:07.676 05:01:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=369 00:12:07.676 05:01:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:07.676 05:01:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:07.676 05:01:18 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:07.676 05:01:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:07.676 05:01:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:07.676 05:01:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:07.676 05:01:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:07.676 05:01:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.676 05:01:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.676 05:01:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:07.676 05:01:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.676 05:01:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:07.676 "name": "raid_bdev1", 00:12:07.676 "uuid": "d92207d0-5714-4b05-b3c6-04d8442a1835", 00:12:07.676 "strip_size_kb": 0, 00:12:07.676 "state": "online", 00:12:07.676 "raid_level": "raid1", 00:12:07.676 "superblock": true, 00:12:07.676 "num_base_bdevs": 4, 00:12:07.676 "num_base_bdevs_discovered": 3, 00:12:07.676 "num_base_bdevs_operational": 3, 00:12:07.676 "process": { 00:12:07.676 "type": "rebuild", 00:12:07.676 "target": "spare", 00:12:07.676 "progress": { 00:12:07.676 "blocks": 26624, 00:12:07.676 "percent": 41 00:12:07.676 } 00:12:07.676 }, 00:12:07.676 "base_bdevs_list": [ 00:12:07.676 { 00:12:07.676 "name": "spare", 00:12:07.676 "uuid": "5c745e62-bee6-576a-8805-43110bfac419", 00:12:07.676 "is_configured": true, 00:12:07.676 "data_offset": 2048, 00:12:07.676 "data_size": 63488 00:12:07.676 }, 00:12:07.676 { 00:12:07.676 "name": null, 00:12:07.676 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:07.676 
"is_configured": false, 00:12:07.676 "data_offset": 0, 00:12:07.676 "data_size": 63488 00:12:07.676 }, 00:12:07.676 { 00:12:07.676 "name": "BaseBdev3", 00:12:07.676 "uuid": "c85b4fcf-1ad0-5118-94ca-342a8f79daaa", 00:12:07.676 "is_configured": true, 00:12:07.676 "data_offset": 2048, 00:12:07.676 "data_size": 63488 00:12:07.676 }, 00:12:07.676 { 00:12:07.676 "name": "BaseBdev4", 00:12:07.676 "uuid": "05e8fe10-7389-54fc-beb8-805b30d40725", 00:12:07.676 "is_configured": true, 00:12:07.676 "data_offset": 2048, 00:12:07.676 "data_size": 63488 00:12:07.676 } 00:12:07.676 ] 00:12:07.676 }' 00:12:07.676 05:01:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:07.676 05:01:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:07.676 05:01:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:07.676 05:01:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:07.676 05:01:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:08.642 05:01:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:08.642 05:01:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:08.642 05:01:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:08.642 05:01:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:08.642 05:01:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:08.642 05:01:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:08.642 05:01:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:08.642 05:01:19 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.642 05:01:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:08.642 05:01:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:08.902 05:01:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.902 05:01:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:08.902 "name": "raid_bdev1", 00:12:08.902 "uuid": "d92207d0-5714-4b05-b3c6-04d8442a1835", 00:12:08.902 "strip_size_kb": 0, 00:12:08.902 "state": "online", 00:12:08.902 "raid_level": "raid1", 00:12:08.902 "superblock": true, 00:12:08.902 "num_base_bdevs": 4, 00:12:08.902 "num_base_bdevs_discovered": 3, 00:12:08.902 "num_base_bdevs_operational": 3, 00:12:08.902 "process": { 00:12:08.902 "type": "rebuild", 00:12:08.902 "target": "spare", 00:12:08.902 "progress": { 00:12:08.902 "blocks": 49152, 00:12:08.902 "percent": 77 00:12:08.902 } 00:12:08.902 }, 00:12:08.902 "base_bdevs_list": [ 00:12:08.902 { 00:12:08.902 "name": "spare", 00:12:08.902 "uuid": "5c745e62-bee6-576a-8805-43110bfac419", 00:12:08.902 "is_configured": true, 00:12:08.902 "data_offset": 2048, 00:12:08.902 "data_size": 63488 00:12:08.902 }, 00:12:08.902 { 00:12:08.902 "name": null, 00:12:08.902 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:08.902 "is_configured": false, 00:12:08.902 "data_offset": 0, 00:12:08.902 "data_size": 63488 00:12:08.902 }, 00:12:08.902 { 00:12:08.902 "name": "BaseBdev3", 00:12:08.902 "uuid": "c85b4fcf-1ad0-5118-94ca-342a8f79daaa", 00:12:08.902 "is_configured": true, 00:12:08.902 "data_offset": 2048, 00:12:08.902 "data_size": 63488 00:12:08.902 }, 00:12:08.902 { 00:12:08.902 "name": "BaseBdev4", 00:12:08.902 "uuid": "05e8fe10-7389-54fc-beb8-805b30d40725", 00:12:08.902 "is_configured": true, 00:12:08.902 "data_offset": 2048, 00:12:08.902 "data_size": 63488 00:12:08.902 } 00:12:08.902 ] 00:12:08.902 
}' 00:12:08.902 05:01:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:08.902 05:01:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:08.902 05:01:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:08.902 05:01:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:08.902 05:01:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:09.472 [2024-12-14 05:01:20.147538] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:12:09.472 [2024-12-14 05:01:20.147606] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:12:09.472 [2024-12-14 05:01:20.147706] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:10.041 05:01:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:10.041 05:01:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:10.041 05:01:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:10.041 05:01:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:10.041 05:01:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:10.041 05:01:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:10.041 05:01:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:10.041 05:01:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.041 05:01:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.041 05:01:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 
-- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:10.041 05:01:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.041 05:01:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:10.041 "name": "raid_bdev1", 00:12:10.041 "uuid": "d92207d0-5714-4b05-b3c6-04d8442a1835", 00:12:10.041 "strip_size_kb": 0, 00:12:10.041 "state": "online", 00:12:10.041 "raid_level": "raid1", 00:12:10.041 "superblock": true, 00:12:10.041 "num_base_bdevs": 4, 00:12:10.041 "num_base_bdevs_discovered": 3, 00:12:10.041 "num_base_bdevs_operational": 3, 00:12:10.041 "base_bdevs_list": [ 00:12:10.041 { 00:12:10.041 "name": "spare", 00:12:10.041 "uuid": "5c745e62-bee6-576a-8805-43110bfac419", 00:12:10.041 "is_configured": true, 00:12:10.041 "data_offset": 2048, 00:12:10.041 "data_size": 63488 00:12:10.041 }, 00:12:10.041 { 00:12:10.041 "name": null, 00:12:10.041 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:10.041 "is_configured": false, 00:12:10.041 "data_offset": 0, 00:12:10.041 "data_size": 63488 00:12:10.041 }, 00:12:10.041 { 00:12:10.041 "name": "BaseBdev3", 00:12:10.041 "uuid": "c85b4fcf-1ad0-5118-94ca-342a8f79daaa", 00:12:10.041 "is_configured": true, 00:12:10.041 "data_offset": 2048, 00:12:10.041 "data_size": 63488 00:12:10.041 }, 00:12:10.041 { 00:12:10.041 "name": "BaseBdev4", 00:12:10.041 "uuid": "05e8fe10-7389-54fc-beb8-805b30d40725", 00:12:10.041 "is_configured": true, 00:12:10.041 "data_offset": 2048, 00:12:10.041 "data_size": 63488 00:12:10.041 } 00:12:10.041 ] 00:12:10.041 }' 00:12:10.041 05:01:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:10.041 05:01:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:12:10.041 05:01:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:10.041 05:01:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # 
[[ none == \s\p\a\r\e ]] 00:12:10.041 05:01:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:12:10.041 05:01:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:10.041 05:01:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:10.041 05:01:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:10.041 05:01:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:10.041 05:01:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:10.041 05:01:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:10.041 05:01:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:10.041 05:01:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.041 05:01:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.041 05:01:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.041 05:01:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:10.041 "name": "raid_bdev1", 00:12:10.041 "uuid": "d92207d0-5714-4b05-b3c6-04d8442a1835", 00:12:10.041 "strip_size_kb": 0, 00:12:10.041 "state": "online", 00:12:10.041 "raid_level": "raid1", 00:12:10.041 "superblock": true, 00:12:10.041 "num_base_bdevs": 4, 00:12:10.041 "num_base_bdevs_discovered": 3, 00:12:10.041 "num_base_bdevs_operational": 3, 00:12:10.041 "base_bdevs_list": [ 00:12:10.041 { 00:12:10.041 "name": "spare", 00:12:10.041 "uuid": "5c745e62-bee6-576a-8805-43110bfac419", 00:12:10.041 "is_configured": true, 00:12:10.041 "data_offset": 2048, 00:12:10.041 "data_size": 63488 00:12:10.041 }, 00:12:10.041 { 00:12:10.041 "name": null, 00:12:10.041 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:12:10.041 "is_configured": false, 00:12:10.041 "data_offset": 0, 00:12:10.041 "data_size": 63488 00:12:10.041 }, 00:12:10.041 { 00:12:10.041 "name": "BaseBdev3", 00:12:10.041 "uuid": "c85b4fcf-1ad0-5118-94ca-342a8f79daaa", 00:12:10.041 "is_configured": true, 00:12:10.041 "data_offset": 2048, 00:12:10.041 "data_size": 63488 00:12:10.041 }, 00:12:10.041 { 00:12:10.041 "name": "BaseBdev4", 00:12:10.041 "uuid": "05e8fe10-7389-54fc-beb8-805b30d40725", 00:12:10.041 "is_configured": true, 00:12:10.041 "data_offset": 2048, 00:12:10.041 "data_size": 63488 00:12:10.041 } 00:12:10.041 ] 00:12:10.041 }' 00:12:10.041 05:01:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:10.041 05:01:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:10.041 05:01:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:10.300 05:01:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:10.300 05:01:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:10.301 05:01:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:10.301 05:01:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:10.301 05:01:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:10.301 05:01:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:10.301 05:01:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:10.301 05:01:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:10.301 05:01:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:10.301 
05:01:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:10.301 05:01:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:10.301 05:01:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:10.301 05:01:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:10.301 05:01:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.301 05:01:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.301 05:01:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.301 05:01:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:10.301 "name": "raid_bdev1", 00:12:10.301 "uuid": "d92207d0-5714-4b05-b3c6-04d8442a1835", 00:12:10.301 "strip_size_kb": 0, 00:12:10.301 "state": "online", 00:12:10.301 "raid_level": "raid1", 00:12:10.301 "superblock": true, 00:12:10.301 "num_base_bdevs": 4, 00:12:10.301 "num_base_bdevs_discovered": 3, 00:12:10.301 "num_base_bdevs_operational": 3, 00:12:10.301 "base_bdevs_list": [ 00:12:10.301 { 00:12:10.301 "name": "spare", 00:12:10.301 "uuid": "5c745e62-bee6-576a-8805-43110bfac419", 00:12:10.301 "is_configured": true, 00:12:10.301 "data_offset": 2048, 00:12:10.301 "data_size": 63488 00:12:10.301 }, 00:12:10.301 { 00:12:10.301 "name": null, 00:12:10.301 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:10.301 "is_configured": false, 00:12:10.301 "data_offset": 0, 00:12:10.301 "data_size": 63488 00:12:10.301 }, 00:12:10.301 { 00:12:10.301 "name": "BaseBdev3", 00:12:10.301 "uuid": "c85b4fcf-1ad0-5118-94ca-342a8f79daaa", 00:12:10.301 "is_configured": true, 00:12:10.301 "data_offset": 2048, 00:12:10.301 "data_size": 63488 00:12:10.301 }, 00:12:10.301 { 00:12:10.301 "name": "BaseBdev4", 00:12:10.301 "uuid": 
"05e8fe10-7389-54fc-beb8-805b30d40725", 00:12:10.301 "is_configured": true, 00:12:10.301 "data_offset": 2048, 00:12:10.301 "data_size": 63488 00:12:10.301 } 00:12:10.301 ] 00:12:10.301 }' 00:12:10.301 05:01:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:10.301 05:01:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.560 05:01:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:10.560 05:01:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.560 05:01:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.560 [2024-12-14 05:01:21.337243] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:10.560 [2024-12-14 05:01:21.337307] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:10.560 [2024-12-14 05:01:21.337435] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:10.560 [2024-12-14 05:01:21.337549] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:10.560 [2024-12-14 05:01:21.337622] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:12:10.560 05:01:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.560 05:01:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:10.560 05:01:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.560 05:01:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.560 05:01:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:12:10.560 05:01:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:12:10.560 05:01:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:12:10.560 05:01:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:12:10.560 05:01:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:12:10.560 05:01:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:12:10.560 05:01:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:10.560 05:01:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:12:10.560 05:01:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:10.560 05:01:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:10.560 05:01:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:10.560 05:01:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:12:10.560 05:01:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:10.560 05:01:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:10.560 05:01:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:12:10.820 /dev/nbd0 00:12:10.820 05:01:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:10.820 05:01:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:10.820 05:01:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:12:10.820 05:01:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:12:10.820 05:01:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # 
(( i = 1 )) 00:12:10.820 05:01:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:10.820 05:01:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:12:10.820 05:01:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:12:10.820 05:01:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:10.820 05:01:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:10.820 05:01:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:10.820 1+0 records in 00:12:10.820 1+0 records out 00:12:10.820 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000504163 s, 8.1 MB/s 00:12:10.820 05:01:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:10.820 05:01:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:12:10.820 05:01:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:10.820 05:01:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:10.820 05:01:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:12:10.820 05:01:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:10.820 05:01:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:10.820 05:01:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:12:11.081 /dev/nbd1 00:12:11.081 05:01:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:11.081 05:01:21 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:11.081 05:01:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:12:11.081 05:01:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:12:11.081 05:01:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:11.081 05:01:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:11.081 05:01:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:12:11.081 05:01:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:12:11.081 05:01:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:11.081 05:01:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:11.081 05:01:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:11.081 1+0 records in 00:12:11.081 1+0 records out 00:12:11.081 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000330559 s, 12.4 MB/s 00:12:11.081 05:01:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:11.081 05:01:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:12:11.081 05:01:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:11.081 05:01:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:11.081 05:01:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:12:11.081 05:01:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:11.081 05:01:21 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:11.081 05:01:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:12:11.341 05:01:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:12:11.341 05:01:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:11.341 05:01:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:11.341 05:01:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:11.341 05:01:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:12:11.341 05:01:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:11.341 05:01:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:11.341 05:01:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:11.341 05:01:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:11.341 05:01:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:11.341 05:01:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:11.341 05:01:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:11.341 05:01:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:11.341 05:01:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:12:11.341 05:01:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:12:11.341 05:01:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:11.341 05:01:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:12:11.601 05:01:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:11.601 05:01:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:11.601 05:01:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:11.601 05:01:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:11.601 05:01:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:11.601 05:01:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:11.601 05:01:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:12:11.601 05:01:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:12:11.601 05:01:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:12:11.601 05:01:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:12:11.601 05:01:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.601 05:01:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.601 05:01:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.601 05:01:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:11.601 05:01:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.601 05:01:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.601 [2024-12-14 05:01:22.419455] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:11.601 [2024-12-14 05:01:22.419564] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 
00:12:11.601 [2024-12-14 05:01:22.419599] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:12:11.601 [2024-12-14 05:01:22.419633] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:11.601 [2024-12-14 05:01:22.421730] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:11.601 [2024-12-14 05:01:22.421800] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:11.601 [2024-12-14 05:01:22.421904] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:12:11.601 [2024-12-14 05:01:22.421963] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:11.601 [2024-12-14 05:01:22.422083] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:11.601 [2024-12-14 05:01:22.422232] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:11.601 spare 00:12:11.601 05:01:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.601 05:01:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:12:11.602 05:01:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.602 05:01:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.862 [2024-12-14 05:01:22.522149] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006600 00:12:11.862 [2024-12-14 05:01:22.522214] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:11.862 [2024-12-14 05:01:22.522527] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1ae0 00:12:11.862 [2024-12-14 05:01:22.522717] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006600 00:12:11.862 [2024-12-14 05:01:22.522759] 
bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006600 00:12:11.862 [2024-12-14 05:01:22.522927] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:11.862 05:01:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.862 05:01:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:11.862 05:01:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:11.862 05:01:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:11.862 05:01:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:11.862 05:01:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:11.862 05:01:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:11.862 05:01:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:11.862 05:01:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:11.862 05:01:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:11.862 05:01:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:11.862 05:01:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:11.862 05:01:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:11.862 05:01:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.862 05:01:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.862 05:01:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.862 
05:01:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:11.862 "name": "raid_bdev1", 00:12:11.862 "uuid": "d92207d0-5714-4b05-b3c6-04d8442a1835", 00:12:11.862 "strip_size_kb": 0, 00:12:11.862 "state": "online", 00:12:11.862 "raid_level": "raid1", 00:12:11.862 "superblock": true, 00:12:11.862 "num_base_bdevs": 4, 00:12:11.862 "num_base_bdevs_discovered": 3, 00:12:11.862 "num_base_bdevs_operational": 3, 00:12:11.862 "base_bdevs_list": [ 00:12:11.862 { 00:12:11.862 "name": "spare", 00:12:11.862 "uuid": "5c745e62-bee6-576a-8805-43110bfac419", 00:12:11.862 "is_configured": true, 00:12:11.862 "data_offset": 2048, 00:12:11.862 "data_size": 63488 00:12:11.862 }, 00:12:11.862 { 00:12:11.862 "name": null, 00:12:11.862 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:11.862 "is_configured": false, 00:12:11.862 "data_offset": 2048, 00:12:11.862 "data_size": 63488 00:12:11.862 }, 00:12:11.862 { 00:12:11.862 "name": "BaseBdev3", 00:12:11.862 "uuid": "c85b4fcf-1ad0-5118-94ca-342a8f79daaa", 00:12:11.862 "is_configured": true, 00:12:11.862 "data_offset": 2048, 00:12:11.862 "data_size": 63488 00:12:11.862 }, 00:12:11.862 { 00:12:11.862 "name": "BaseBdev4", 00:12:11.862 "uuid": "05e8fe10-7389-54fc-beb8-805b30d40725", 00:12:11.862 "is_configured": true, 00:12:11.862 "data_offset": 2048, 00:12:11.862 "data_size": 63488 00:12:11.862 } 00:12:11.862 ] 00:12:11.862 }' 00:12:11.862 05:01:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:11.862 05:01:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.122 05:01:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:12.123 05:01:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:12.123 05:01:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:12.123 05:01:22 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:12.123 05:01:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:12.123 05:01:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:12.123 05:01:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:12.123 05:01:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.123 05:01:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.123 05:01:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.383 05:01:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:12.383 "name": "raid_bdev1", 00:12:12.383 "uuid": "d92207d0-5714-4b05-b3c6-04d8442a1835", 00:12:12.383 "strip_size_kb": 0, 00:12:12.383 "state": "online", 00:12:12.383 "raid_level": "raid1", 00:12:12.383 "superblock": true, 00:12:12.383 "num_base_bdevs": 4, 00:12:12.383 "num_base_bdevs_discovered": 3, 00:12:12.383 "num_base_bdevs_operational": 3, 00:12:12.383 "base_bdevs_list": [ 00:12:12.383 { 00:12:12.383 "name": "spare", 00:12:12.383 "uuid": "5c745e62-bee6-576a-8805-43110bfac419", 00:12:12.383 "is_configured": true, 00:12:12.383 "data_offset": 2048, 00:12:12.383 "data_size": 63488 00:12:12.383 }, 00:12:12.383 { 00:12:12.383 "name": null, 00:12:12.383 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:12.383 "is_configured": false, 00:12:12.383 "data_offset": 2048, 00:12:12.383 "data_size": 63488 00:12:12.383 }, 00:12:12.383 { 00:12:12.383 "name": "BaseBdev3", 00:12:12.383 "uuid": "c85b4fcf-1ad0-5118-94ca-342a8f79daaa", 00:12:12.383 "is_configured": true, 00:12:12.383 "data_offset": 2048, 00:12:12.383 "data_size": 63488 00:12:12.383 }, 00:12:12.383 { 00:12:12.383 "name": "BaseBdev4", 00:12:12.383 "uuid": 
"05e8fe10-7389-54fc-beb8-805b30d40725", 00:12:12.383 "is_configured": true, 00:12:12.383 "data_offset": 2048, 00:12:12.383 "data_size": 63488 00:12:12.383 } 00:12:12.383 ] 00:12:12.383 }' 00:12:12.383 05:01:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:12.383 05:01:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:12.383 05:01:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:12.383 05:01:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:12.383 05:01:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:12.383 05:01:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:12:12.383 05:01:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.383 05:01:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.383 05:01:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.383 05:01:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:12:12.383 05:01:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:12.383 05:01:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.383 05:01:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.383 [2024-12-14 05:01:23.178240] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:12.383 05:01:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.383 05:01:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:12.383 05:01:23 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:12.383 05:01:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:12.383 05:01:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:12.383 05:01:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:12.383 05:01:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:12.383 05:01:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:12.383 05:01:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:12.383 05:01:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:12.383 05:01:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:12.383 05:01:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:12.383 05:01:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.383 05:01:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.383 05:01:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:12.383 05:01:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.383 05:01:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:12.383 "name": "raid_bdev1", 00:12:12.383 "uuid": "d92207d0-5714-4b05-b3c6-04d8442a1835", 00:12:12.383 "strip_size_kb": 0, 00:12:12.383 "state": "online", 00:12:12.383 "raid_level": "raid1", 00:12:12.383 "superblock": true, 00:12:12.383 "num_base_bdevs": 4, 00:12:12.383 "num_base_bdevs_discovered": 2, 00:12:12.383 "num_base_bdevs_operational": 2, 00:12:12.383 "base_bdevs_list": [ 00:12:12.383 { 
00:12:12.383 "name": null, 00:12:12.383 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:12.383 "is_configured": false, 00:12:12.383 "data_offset": 0, 00:12:12.383 "data_size": 63488 00:12:12.383 }, 00:12:12.383 { 00:12:12.383 "name": null, 00:12:12.383 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:12.383 "is_configured": false, 00:12:12.383 "data_offset": 2048, 00:12:12.383 "data_size": 63488 00:12:12.383 }, 00:12:12.383 { 00:12:12.383 "name": "BaseBdev3", 00:12:12.383 "uuid": "c85b4fcf-1ad0-5118-94ca-342a8f79daaa", 00:12:12.383 "is_configured": true, 00:12:12.383 "data_offset": 2048, 00:12:12.383 "data_size": 63488 00:12:12.383 }, 00:12:12.383 { 00:12:12.383 "name": "BaseBdev4", 00:12:12.383 "uuid": "05e8fe10-7389-54fc-beb8-805b30d40725", 00:12:12.383 "is_configured": true, 00:12:12.383 "data_offset": 2048, 00:12:12.383 "data_size": 63488 00:12:12.383 } 00:12:12.383 ] 00:12:12.383 }' 00:12:12.383 05:01:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:12.383 05:01:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.953 05:01:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:12.953 05:01:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.953 05:01:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.953 [2024-12-14 05:01:23.633517] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:12.953 [2024-12-14 05:01:23.633741] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:12:12.953 [2024-12-14 05:01:23.633811] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:12:12.953 [2024-12-14 05:01:23.633863] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:12.953 [2024-12-14 05:01:23.637083] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1bb0 00:12:12.953 [2024-12-14 05:01:23.638924] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:12.953 05:01:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.953 05:01:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:12:13.893 05:01:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:13.893 05:01:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:13.893 05:01:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:13.893 05:01:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:13.893 05:01:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:13.893 05:01:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:13.893 05:01:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:13.893 05:01:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.893 05:01:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.893 05:01:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.893 05:01:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:13.893 "name": "raid_bdev1", 00:12:13.893 "uuid": "d92207d0-5714-4b05-b3c6-04d8442a1835", 00:12:13.893 "strip_size_kb": 0, 00:12:13.893 "state": "online", 00:12:13.893 "raid_level": "raid1", 
00:12:13.893 "superblock": true, 00:12:13.893 "num_base_bdevs": 4, 00:12:13.893 "num_base_bdevs_discovered": 3, 00:12:13.893 "num_base_bdevs_operational": 3, 00:12:13.893 "process": { 00:12:13.893 "type": "rebuild", 00:12:13.893 "target": "spare", 00:12:13.893 "progress": { 00:12:13.893 "blocks": 20480, 00:12:13.893 "percent": 32 00:12:13.893 } 00:12:13.893 }, 00:12:13.893 "base_bdevs_list": [ 00:12:13.893 { 00:12:13.893 "name": "spare", 00:12:13.893 "uuid": "5c745e62-bee6-576a-8805-43110bfac419", 00:12:13.893 "is_configured": true, 00:12:13.893 "data_offset": 2048, 00:12:13.893 "data_size": 63488 00:12:13.893 }, 00:12:13.893 { 00:12:13.893 "name": null, 00:12:13.893 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:13.893 "is_configured": false, 00:12:13.893 "data_offset": 2048, 00:12:13.893 "data_size": 63488 00:12:13.893 }, 00:12:13.893 { 00:12:13.893 "name": "BaseBdev3", 00:12:13.893 "uuid": "c85b4fcf-1ad0-5118-94ca-342a8f79daaa", 00:12:13.893 "is_configured": true, 00:12:13.893 "data_offset": 2048, 00:12:13.893 "data_size": 63488 00:12:13.893 }, 00:12:13.893 { 00:12:13.893 "name": "BaseBdev4", 00:12:13.893 "uuid": "05e8fe10-7389-54fc-beb8-805b30d40725", 00:12:13.893 "is_configured": true, 00:12:13.893 "data_offset": 2048, 00:12:13.893 "data_size": 63488 00:12:13.893 } 00:12:13.893 ] 00:12:13.893 }' 00:12:13.893 05:01:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:13.893 05:01:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:13.893 05:01:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:13.893 05:01:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:13.893 05:01:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:12:13.893 05:01:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:12:13.893 05:01:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.152 [2024-12-14 05:01:24.777695] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:14.152 [2024-12-14 05:01:24.842889] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:14.152 [2024-12-14 05:01:24.843003] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:14.152 [2024-12-14 05:01:24.843038] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:14.152 [2024-12-14 05:01:24.843060] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:14.152 05:01:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.152 05:01:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:14.152 05:01:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:14.152 05:01:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:14.152 05:01:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:14.152 05:01:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:14.152 05:01:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:14.152 05:01:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:14.152 05:01:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:14.152 05:01:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:14.152 05:01:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:14.152 05:01:24 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:14.152 05:01:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:14.152 05:01:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.152 05:01:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.152 05:01:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.152 05:01:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:14.152 "name": "raid_bdev1", 00:12:14.152 "uuid": "d92207d0-5714-4b05-b3c6-04d8442a1835", 00:12:14.152 "strip_size_kb": 0, 00:12:14.152 "state": "online", 00:12:14.152 "raid_level": "raid1", 00:12:14.152 "superblock": true, 00:12:14.152 "num_base_bdevs": 4, 00:12:14.152 "num_base_bdevs_discovered": 2, 00:12:14.152 "num_base_bdevs_operational": 2, 00:12:14.152 "base_bdevs_list": [ 00:12:14.152 { 00:12:14.152 "name": null, 00:12:14.152 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:14.152 "is_configured": false, 00:12:14.152 "data_offset": 0, 00:12:14.152 "data_size": 63488 00:12:14.152 }, 00:12:14.152 { 00:12:14.152 "name": null, 00:12:14.152 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:14.152 "is_configured": false, 00:12:14.152 "data_offset": 2048, 00:12:14.152 "data_size": 63488 00:12:14.152 }, 00:12:14.152 { 00:12:14.152 "name": "BaseBdev3", 00:12:14.152 "uuid": "c85b4fcf-1ad0-5118-94ca-342a8f79daaa", 00:12:14.152 "is_configured": true, 00:12:14.152 "data_offset": 2048, 00:12:14.152 "data_size": 63488 00:12:14.152 }, 00:12:14.152 { 00:12:14.152 "name": "BaseBdev4", 00:12:14.152 "uuid": "05e8fe10-7389-54fc-beb8-805b30d40725", 00:12:14.152 "is_configured": true, 00:12:14.152 "data_offset": 2048, 00:12:14.152 "data_size": 63488 00:12:14.152 } 00:12:14.152 ] 00:12:14.152 }' 00:12:14.152 05:01:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:12:14.152 05:01:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.412 05:01:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:14.412 05:01:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.412 05:01:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.412 [2024-12-14 05:01:25.270043] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:14.412 [2024-12-14 05:01:25.270150] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:14.412 [2024-12-14 05:01:25.270208] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:12:14.412 [2024-12-14 05:01:25.270243] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:14.412 [2024-12-14 05:01:25.270697] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:14.412 [2024-12-14 05:01:25.270755] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:14.412 [2024-12-14 05:01:25.270852] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:12:14.412 [2024-12-14 05:01:25.270896] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:12:14.412 [2024-12-14 05:01:25.270937] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:12:14.412 [2024-12-14 05:01:25.271015] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:14.412 [2024-12-14 05:01:25.273954] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:12:14.412 [2024-12-14 05:01:25.275804] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:14.412 spare 00:12:14.412 05:01:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.412 05:01:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:12:15.794 05:01:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:15.795 05:01:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:15.795 05:01:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:15.795 05:01:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:15.795 05:01:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:15.795 05:01:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:15.795 05:01:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.795 05:01:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.795 05:01:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:15.795 05:01:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.795 05:01:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:15.795 "name": "raid_bdev1", 00:12:15.795 "uuid": "d92207d0-5714-4b05-b3c6-04d8442a1835", 00:12:15.795 "strip_size_kb": 0, 00:12:15.795 "state": "online", 00:12:15.795 
"raid_level": "raid1", 00:12:15.795 "superblock": true, 00:12:15.795 "num_base_bdevs": 4, 00:12:15.795 "num_base_bdevs_discovered": 3, 00:12:15.795 "num_base_bdevs_operational": 3, 00:12:15.795 "process": { 00:12:15.795 "type": "rebuild", 00:12:15.795 "target": "spare", 00:12:15.795 "progress": { 00:12:15.795 "blocks": 20480, 00:12:15.795 "percent": 32 00:12:15.795 } 00:12:15.795 }, 00:12:15.795 "base_bdevs_list": [ 00:12:15.795 { 00:12:15.795 "name": "spare", 00:12:15.795 "uuid": "5c745e62-bee6-576a-8805-43110bfac419", 00:12:15.795 "is_configured": true, 00:12:15.795 "data_offset": 2048, 00:12:15.795 "data_size": 63488 00:12:15.795 }, 00:12:15.795 { 00:12:15.795 "name": null, 00:12:15.795 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:15.795 "is_configured": false, 00:12:15.795 "data_offset": 2048, 00:12:15.795 "data_size": 63488 00:12:15.795 }, 00:12:15.795 { 00:12:15.795 "name": "BaseBdev3", 00:12:15.795 "uuid": "c85b4fcf-1ad0-5118-94ca-342a8f79daaa", 00:12:15.795 "is_configured": true, 00:12:15.795 "data_offset": 2048, 00:12:15.795 "data_size": 63488 00:12:15.795 }, 00:12:15.795 { 00:12:15.795 "name": "BaseBdev4", 00:12:15.795 "uuid": "05e8fe10-7389-54fc-beb8-805b30d40725", 00:12:15.795 "is_configured": true, 00:12:15.795 "data_offset": 2048, 00:12:15.795 "data_size": 63488 00:12:15.795 } 00:12:15.795 ] 00:12:15.795 }' 00:12:15.795 05:01:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:15.795 05:01:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:15.795 05:01:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:15.795 05:01:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:15.795 05:01:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:12:15.795 05:01:26 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.795 05:01:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.795 [2024-12-14 05:01:26.434442] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:15.795 [2024-12-14 05:01:26.479699] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:15.795 [2024-12-14 05:01:26.479810] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:15.795 [2024-12-14 05:01:26.479849] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:15.795 [2024-12-14 05:01:26.479869] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:15.795 05:01:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.795 05:01:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:15.795 05:01:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:15.795 05:01:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:15.795 05:01:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:15.795 05:01:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:15.795 05:01:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:15.795 05:01:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:15.795 05:01:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:15.795 05:01:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:15.795 05:01:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:15.795 
05:01:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:15.795 05:01:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.795 05:01:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.795 05:01:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:15.795 05:01:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.795 05:01:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:15.795 "name": "raid_bdev1", 00:12:15.795 "uuid": "d92207d0-5714-4b05-b3c6-04d8442a1835", 00:12:15.795 "strip_size_kb": 0, 00:12:15.795 "state": "online", 00:12:15.795 "raid_level": "raid1", 00:12:15.795 "superblock": true, 00:12:15.795 "num_base_bdevs": 4, 00:12:15.795 "num_base_bdevs_discovered": 2, 00:12:15.795 "num_base_bdevs_operational": 2, 00:12:15.795 "base_bdevs_list": [ 00:12:15.795 { 00:12:15.795 "name": null, 00:12:15.795 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:15.795 "is_configured": false, 00:12:15.795 "data_offset": 0, 00:12:15.795 "data_size": 63488 00:12:15.795 }, 00:12:15.795 { 00:12:15.795 "name": null, 00:12:15.795 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:15.795 "is_configured": false, 00:12:15.795 "data_offset": 2048, 00:12:15.795 "data_size": 63488 00:12:15.795 }, 00:12:15.795 { 00:12:15.795 "name": "BaseBdev3", 00:12:15.795 "uuid": "c85b4fcf-1ad0-5118-94ca-342a8f79daaa", 00:12:15.795 "is_configured": true, 00:12:15.795 "data_offset": 2048, 00:12:15.795 "data_size": 63488 00:12:15.795 }, 00:12:15.795 { 00:12:15.795 "name": "BaseBdev4", 00:12:15.795 "uuid": "05e8fe10-7389-54fc-beb8-805b30d40725", 00:12:15.795 "is_configured": true, 00:12:15.795 "data_offset": 2048, 00:12:15.795 "data_size": 63488 00:12:15.795 } 00:12:15.795 ] 00:12:15.795 }' 00:12:15.795 05:01:26 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:15.795 05:01:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.055 05:01:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:16.315 05:01:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:16.315 05:01:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:16.315 05:01:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:16.315 05:01:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:16.315 05:01:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:16.315 05:01:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:16.315 05:01:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.315 05:01:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.315 05:01:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.315 05:01:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:16.315 "name": "raid_bdev1", 00:12:16.315 "uuid": "d92207d0-5714-4b05-b3c6-04d8442a1835", 00:12:16.315 "strip_size_kb": 0, 00:12:16.315 "state": "online", 00:12:16.315 "raid_level": "raid1", 00:12:16.315 "superblock": true, 00:12:16.315 "num_base_bdevs": 4, 00:12:16.315 "num_base_bdevs_discovered": 2, 00:12:16.315 "num_base_bdevs_operational": 2, 00:12:16.315 "base_bdevs_list": [ 00:12:16.315 { 00:12:16.315 "name": null, 00:12:16.315 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:16.315 "is_configured": false, 00:12:16.315 "data_offset": 0, 00:12:16.315 "data_size": 63488 00:12:16.315 }, 00:12:16.315 
{ 00:12:16.315 "name": null, 00:12:16.315 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:16.315 "is_configured": false, 00:12:16.315 "data_offset": 2048, 00:12:16.315 "data_size": 63488 00:12:16.315 }, 00:12:16.315 { 00:12:16.315 "name": "BaseBdev3", 00:12:16.315 "uuid": "c85b4fcf-1ad0-5118-94ca-342a8f79daaa", 00:12:16.315 "is_configured": true, 00:12:16.315 "data_offset": 2048, 00:12:16.315 "data_size": 63488 00:12:16.315 }, 00:12:16.315 { 00:12:16.315 "name": "BaseBdev4", 00:12:16.315 "uuid": "05e8fe10-7389-54fc-beb8-805b30d40725", 00:12:16.315 "is_configured": true, 00:12:16.315 "data_offset": 2048, 00:12:16.315 "data_size": 63488 00:12:16.315 } 00:12:16.315 ] 00:12:16.315 }' 00:12:16.315 05:01:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:16.315 05:01:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:16.315 05:01:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:16.315 05:01:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:16.315 05:01:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:12:16.315 05:01:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.315 05:01:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.315 05:01:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.315 05:01:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:16.315 05:01:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.315 05:01:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.315 [2024-12-14 05:01:27.106471] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:16.315 [2024-12-14 05:01:27.106572] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:16.315 [2024-12-14 05:01:27.106610] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:12:16.315 [2024-12-14 05:01:27.106638] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:16.315 [2024-12-14 05:01:27.107316] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:16.315 [2024-12-14 05:01:27.107376] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:16.315 [2024-12-14 05:01:27.107507] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:12:16.315 [2024-12-14 05:01:27.107522] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:12:16.315 [2024-12-14 05:01:27.107531] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:12:16.315 [2024-12-14 05:01:27.107547] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:12:16.315 BaseBdev1 00:12:16.315 05:01:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.315 05:01:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:12:17.255 05:01:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:17.255 05:01:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:17.255 05:01:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:17.255 05:01:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:17.255 05:01:28 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:17.255 05:01:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:17.255 05:01:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:17.255 05:01:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:17.255 05:01:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:17.255 05:01:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:17.255 05:01:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:17.255 05:01:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:17.255 05:01:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.255 05:01:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.515 05:01:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.515 05:01:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:17.515 "name": "raid_bdev1", 00:12:17.515 "uuid": "d92207d0-5714-4b05-b3c6-04d8442a1835", 00:12:17.515 "strip_size_kb": 0, 00:12:17.515 "state": "online", 00:12:17.515 "raid_level": "raid1", 00:12:17.515 "superblock": true, 00:12:17.515 "num_base_bdevs": 4, 00:12:17.515 "num_base_bdevs_discovered": 2, 00:12:17.515 "num_base_bdevs_operational": 2, 00:12:17.515 "base_bdevs_list": [ 00:12:17.515 { 00:12:17.515 "name": null, 00:12:17.515 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:17.515 "is_configured": false, 00:12:17.515 "data_offset": 0, 00:12:17.515 "data_size": 63488 00:12:17.515 }, 00:12:17.515 { 00:12:17.515 "name": null, 00:12:17.515 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:17.515 
"is_configured": false, 00:12:17.515 "data_offset": 2048, 00:12:17.515 "data_size": 63488 00:12:17.515 }, 00:12:17.515 { 00:12:17.515 "name": "BaseBdev3", 00:12:17.515 "uuid": "c85b4fcf-1ad0-5118-94ca-342a8f79daaa", 00:12:17.515 "is_configured": true, 00:12:17.515 "data_offset": 2048, 00:12:17.515 "data_size": 63488 00:12:17.515 }, 00:12:17.515 { 00:12:17.515 "name": "BaseBdev4", 00:12:17.515 "uuid": "05e8fe10-7389-54fc-beb8-805b30d40725", 00:12:17.515 "is_configured": true, 00:12:17.515 "data_offset": 2048, 00:12:17.515 "data_size": 63488 00:12:17.515 } 00:12:17.515 ] 00:12:17.515 }' 00:12:17.515 05:01:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:17.515 05:01:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.775 05:01:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:17.775 05:01:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:17.775 05:01:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:17.775 05:01:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:17.775 05:01:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:17.775 05:01:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:17.775 05:01:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:17.775 05:01:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.775 05:01:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.775 05:01:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.775 05:01:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:12:17.775 "name": "raid_bdev1", 00:12:17.775 "uuid": "d92207d0-5714-4b05-b3c6-04d8442a1835", 00:12:17.775 "strip_size_kb": 0, 00:12:17.775 "state": "online", 00:12:17.775 "raid_level": "raid1", 00:12:17.775 "superblock": true, 00:12:17.775 "num_base_bdevs": 4, 00:12:17.775 "num_base_bdevs_discovered": 2, 00:12:17.775 "num_base_bdevs_operational": 2, 00:12:17.775 "base_bdevs_list": [ 00:12:17.775 { 00:12:17.775 "name": null, 00:12:17.775 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:17.775 "is_configured": false, 00:12:17.775 "data_offset": 0, 00:12:17.775 "data_size": 63488 00:12:17.775 }, 00:12:17.775 { 00:12:17.775 "name": null, 00:12:17.775 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:17.775 "is_configured": false, 00:12:17.775 "data_offset": 2048, 00:12:17.775 "data_size": 63488 00:12:17.775 }, 00:12:17.775 { 00:12:17.775 "name": "BaseBdev3", 00:12:17.775 "uuid": "c85b4fcf-1ad0-5118-94ca-342a8f79daaa", 00:12:17.775 "is_configured": true, 00:12:17.775 "data_offset": 2048, 00:12:17.775 "data_size": 63488 00:12:17.775 }, 00:12:17.775 { 00:12:17.775 "name": "BaseBdev4", 00:12:17.775 "uuid": "05e8fe10-7389-54fc-beb8-805b30d40725", 00:12:17.775 "is_configured": true, 00:12:17.775 "data_offset": 2048, 00:12:17.775 "data_size": 63488 00:12:17.775 } 00:12:17.775 ] 00:12:17.775 }' 00:12:17.775 05:01:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:17.775 05:01:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:17.775 05:01:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:18.035 05:01:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:18.035 05:01:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:18.035 05:01:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@650 -- # local 
es=0 00:12:18.035 05:01:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:18.035 05:01:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:12:18.035 05:01:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:18.035 05:01:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:12:18.035 05:01:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:18.035 05:01:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:18.035 05:01:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.035 05:01:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.035 [2024-12-14 05:01:28.691752] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:18.035 [2024-12-14 05:01:28.691935] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:12:18.035 [2024-12-14 05:01:28.692001] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:12:18.035 request: 00:12:18.035 { 00:12:18.035 "base_bdev": "BaseBdev1", 00:12:18.035 "raid_bdev": "raid_bdev1", 00:12:18.035 "method": "bdev_raid_add_base_bdev", 00:12:18.035 "req_id": 1 00:12:18.035 } 00:12:18.035 Got JSON-RPC error response 00:12:18.035 response: 00:12:18.035 { 00:12:18.035 "code": -22, 00:12:18.035 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:12:18.035 } 00:12:18.035 05:01:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:12:18.035 05:01:28 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@653 -- # es=1 00:12:18.035 05:01:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:18.035 05:01:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:18.035 05:01:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:18.035 05:01:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:12:18.975 05:01:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:18.975 05:01:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:18.975 05:01:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:18.975 05:01:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:18.975 05:01:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:18.975 05:01:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:18.975 05:01:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:18.975 05:01:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:18.975 05:01:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:18.975 05:01:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:18.975 05:01:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:18.975 05:01:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.975 05:01:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.975 05:01:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:12:18.975 05:01:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.975 05:01:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:18.975 "name": "raid_bdev1", 00:12:18.975 "uuid": "d92207d0-5714-4b05-b3c6-04d8442a1835", 00:12:18.975 "strip_size_kb": 0, 00:12:18.975 "state": "online", 00:12:18.975 "raid_level": "raid1", 00:12:18.975 "superblock": true, 00:12:18.975 "num_base_bdevs": 4, 00:12:18.975 "num_base_bdevs_discovered": 2, 00:12:18.975 "num_base_bdevs_operational": 2, 00:12:18.975 "base_bdevs_list": [ 00:12:18.975 { 00:12:18.975 "name": null, 00:12:18.975 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:18.975 "is_configured": false, 00:12:18.975 "data_offset": 0, 00:12:18.975 "data_size": 63488 00:12:18.975 }, 00:12:18.975 { 00:12:18.975 "name": null, 00:12:18.975 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:18.975 "is_configured": false, 00:12:18.975 "data_offset": 2048, 00:12:18.975 "data_size": 63488 00:12:18.975 }, 00:12:18.975 { 00:12:18.975 "name": "BaseBdev3", 00:12:18.975 "uuid": "c85b4fcf-1ad0-5118-94ca-342a8f79daaa", 00:12:18.975 "is_configured": true, 00:12:18.975 "data_offset": 2048, 00:12:18.975 "data_size": 63488 00:12:18.975 }, 00:12:18.975 { 00:12:18.975 "name": "BaseBdev4", 00:12:18.975 "uuid": "05e8fe10-7389-54fc-beb8-805b30d40725", 00:12:18.975 "is_configured": true, 00:12:18.975 "data_offset": 2048, 00:12:18.975 "data_size": 63488 00:12:18.975 } 00:12:18.975 ] 00:12:18.975 }' 00:12:18.975 05:01:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:18.975 05:01:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.544 05:01:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:19.544 05:01:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:19.544 05:01:30 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:19.544 05:01:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:19.544 05:01:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:19.544 05:01:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:19.544 05:01:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:19.544 05:01:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.544 05:01:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.544 05:01:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.544 05:01:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:19.544 "name": "raid_bdev1", 00:12:19.544 "uuid": "d92207d0-5714-4b05-b3c6-04d8442a1835", 00:12:19.544 "strip_size_kb": 0, 00:12:19.544 "state": "online", 00:12:19.544 "raid_level": "raid1", 00:12:19.544 "superblock": true, 00:12:19.544 "num_base_bdevs": 4, 00:12:19.544 "num_base_bdevs_discovered": 2, 00:12:19.545 "num_base_bdevs_operational": 2, 00:12:19.545 "base_bdevs_list": [ 00:12:19.545 { 00:12:19.545 "name": null, 00:12:19.545 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:19.545 "is_configured": false, 00:12:19.545 "data_offset": 0, 00:12:19.545 "data_size": 63488 00:12:19.545 }, 00:12:19.545 { 00:12:19.545 "name": null, 00:12:19.545 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:19.545 "is_configured": false, 00:12:19.545 "data_offset": 2048, 00:12:19.545 "data_size": 63488 00:12:19.545 }, 00:12:19.545 { 00:12:19.545 "name": "BaseBdev3", 00:12:19.545 "uuid": "c85b4fcf-1ad0-5118-94ca-342a8f79daaa", 00:12:19.545 "is_configured": true, 00:12:19.545 "data_offset": 2048, 00:12:19.545 "data_size": 63488 00:12:19.545 }, 
00:12:19.545 { 00:12:19.545 "name": "BaseBdev4", 00:12:19.545 "uuid": "05e8fe10-7389-54fc-beb8-805b30d40725", 00:12:19.545 "is_configured": true, 00:12:19.545 "data_offset": 2048, 00:12:19.545 "data_size": 63488 00:12:19.545 } 00:12:19.545 ] 00:12:19.545 }' 00:12:19.545 05:01:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:19.545 05:01:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:19.545 05:01:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:19.545 05:01:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:19.545 05:01:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 88594 00:12:19.545 05:01:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@950 -- # '[' -z 88594 ']' 00:12:19.545 05:01:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # kill -0 88594 00:12:19.545 05:01:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@955 -- # uname 00:12:19.545 05:01:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:19.545 05:01:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 88594 00:12:19.545 05:01:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:19.545 killing process with pid 88594 00:12:19.545 Received shutdown signal, test time was about 60.000000 seconds 00:12:19.545 00:12:19.545 Latency(us) 00:12:19.545 [2024-12-14T05:01:30.428Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:19.545 [2024-12-14T05:01:30.428Z] =================================================================================================================== 00:12:19.545 [2024-12-14T05:01:30.428Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 
00:12:19.545 05:01:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:19.545 05:01:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 88594' 00:12:19.545 05:01:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@969 -- # kill 88594 00:12:19.545 [2024-12-14 05:01:30.307889] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:19.545 [2024-12-14 05:01:30.308003] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:19.545 [2024-12-14 05:01:30.308066] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:19.545 [2024-12-14 05:01:30.308077] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state offline 00:12:19.545 05:01:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@974 -- # wait 88594 00:12:19.545 [2024-12-14 05:01:30.357712] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:19.805 05:01:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:12:19.805 00:12:19.805 real 0m23.394s 00:12:19.805 user 0m28.228s 00:12:19.805 sys 0m3.727s 00:12:19.805 05:01:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:19.805 05:01:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.805 ************************************ 00:12:19.805 END TEST raid_rebuild_test_sb 00:12:19.805 ************************************ 00:12:19.805 05:01:30 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true true 00:12:19.805 05:01:30 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:12:19.805 05:01:30 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:19.805 05:01:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 
00:12:19.805 ************************************ 00:12:19.805 START TEST raid_rebuild_test_io 00:12:19.805 ************************************ 00:12:19.805 05:01:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 4 false true true 00:12:19.805 05:01:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:12:19.805 05:01:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:12:19.805 05:01:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:12:19.805 05:01:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:12:19.805 05:01:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:12:19.805 05:01:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:19.805 05:01:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:19.805 05:01:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:19.805 05:01:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:20.066 05:01:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:20.066 05:01:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:12:20.066 05:01:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:20.066 05:01:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:20.066 05:01:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:12:20.066 05:01:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:20.066 05:01:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:20.066 05:01:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev4 00:12:20.066 05:01:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:20.066 05:01:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:20.066 05:01:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:20.066 05:01:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:20.066 05:01:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:20.066 05:01:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:12:20.066 05:01:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:20.066 05:01:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:20.066 05:01:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:20.066 05:01:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:12:20.066 05:01:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:12:20.066 05:01:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:12:20.066 05:01:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=89332 00:12:20.066 05:01:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:20.066 05:01:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 89332 00:12:20.066 05:01:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@831 -- # '[' -z 89332 ']' 00:12:20.066 05:01:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:20.066 05:01:30 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@836 -- # local max_retries=100 00:12:20.066 05:01:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:20.066 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:20.066 05:01:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:20.066 05:01:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:20.066 [2024-12-14 05:01:30.778176] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:12:20.066 [2024-12-14 05:01:30.778373] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89332 ] 00:12:20.066 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:20.066 Zero copy mechanism will not be used. 
00:12:20.066 [2024-12-14 05:01:30.937582] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:20.326 [2024-12-14 05:01:30.984202] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:20.326 [2024-12-14 05:01:31.026107] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:20.326 [2024-12-14 05:01:31.026153] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:20.896 05:01:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:20.896 05:01:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # return 0 00:12:20.896 05:01:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:20.896 05:01:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:20.896 05:01:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.896 05:01:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:20.896 BaseBdev1_malloc 00:12:20.896 05:01:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.896 05:01:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:20.896 05:01:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.896 05:01:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:20.896 [2024-12-14 05:01:31.660380] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:20.896 [2024-12-14 05:01:31.660511] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:20.896 [2024-12-14 05:01:31.660557] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:20.896 [2024-12-14 
05:01:31.660594] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:20.896 [2024-12-14 05:01:31.662699] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:20.896 [2024-12-14 05:01:31.662768] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:20.896 BaseBdev1 00:12:20.896 05:01:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.896 05:01:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:20.896 05:01:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:20.896 05:01:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.896 05:01:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:20.896 BaseBdev2_malloc 00:12:20.896 05:01:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.896 05:01:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:12:20.896 05:01:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.896 05:01:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:20.896 [2024-12-14 05:01:31.706057] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:12:20.896 [2024-12-14 05:01:31.706219] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:20.896 [2024-12-14 05:01:31.706280] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:20.896 [2024-12-14 05:01:31.706309] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:20.896 [2024-12-14 05:01:31.710553] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev 
registered 00:12:20.896 [2024-12-14 05:01:31.710616] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:20.896 BaseBdev2 00:12:20.896 05:01:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.896 05:01:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:20.896 05:01:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:20.896 05:01:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.896 05:01:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:20.896 BaseBdev3_malloc 00:12:20.896 05:01:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.896 05:01:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:12:20.896 05:01:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.896 05:01:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:20.896 [2024-12-14 05:01:31.736701] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:12:20.896 [2024-12-14 05:01:31.736794] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:20.896 [2024-12-14 05:01:31.736840] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:12:20.896 [2024-12-14 05:01:31.736870] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:20.896 [2024-12-14 05:01:31.738908] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:20.896 [2024-12-14 05:01:31.738993] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:20.896 BaseBdev3 00:12:20.896 05:01:31 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.896 05:01:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:20.896 05:01:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:12:20.896 05:01:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.896 05:01:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:20.896 BaseBdev4_malloc 00:12:20.896 05:01:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.896 05:01:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:12:20.896 05:01:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.896 05:01:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:20.896 [2024-12-14 05:01:31.765305] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:12:20.896 [2024-12-14 05:01:31.765411] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:20.896 [2024-12-14 05:01:31.765453] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:12:20.896 [2024-12-14 05:01:31.765490] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:20.896 [2024-12-14 05:01:31.767537] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:20.896 [2024-12-14 05:01:31.767606] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:12:20.896 BaseBdev4 00:12:20.896 05:01:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.896 05:01:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 
512 -b spare_malloc 00:12:20.896 05:01:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.896 05:01:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:21.156 spare_malloc 00:12:21.156 05:01:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.156 05:01:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:12:21.156 05:01:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.156 05:01:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:21.156 spare_delay 00:12:21.156 05:01:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.156 05:01:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:21.156 05:01:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.156 05:01:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:21.156 [2024-12-14 05:01:31.806013] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:21.156 [2024-12-14 05:01:31.806106] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:21.156 [2024-12-14 05:01:31.806132] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:12:21.156 [2024-12-14 05:01:31.806141] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:21.156 [2024-12-14 05:01:31.808219] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:21.156 [2024-12-14 05:01:31.808289] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:21.156 spare 00:12:21.156 05:01:31 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.156 05:01:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:12:21.156 05:01:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.156 05:01:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:21.156 [2024-12-14 05:01:31.818062] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:21.156 [2024-12-14 05:01:31.819847] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:21.156 [2024-12-14 05:01:31.819960] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:21.156 [2024-12-14 05:01:31.820025] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:21.156 [2024-12-14 05:01:31.820128] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:12:21.156 [2024-12-14 05:01:31.820191] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:12:21.156 [2024-12-14 05:01:31.820454] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:21.156 [2024-12-14 05:01:31.820649] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:12:21.156 [2024-12-14 05:01:31.820699] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:12:21.156 [2024-12-14 05:01:31.820867] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:21.156 05:01:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.156 05:01:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:12:21.156 05:01:31 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:21.156 05:01:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:21.156 05:01:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:21.156 05:01:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:21.156 05:01:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:21.156 05:01:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:21.156 05:01:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:21.156 05:01:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:21.156 05:01:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:21.156 05:01:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:21.156 05:01:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:21.156 05:01:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.156 05:01:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:21.156 05:01:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.156 05:01:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:21.156 "name": "raid_bdev1", 00:12:21.156 "uuid": "36cb8af7-f076-4cfd-ad41-529eccb03745", 00:12:21.156 "strip_size_kb": 0, 00:12:21.156 "state": "online", 00:12:21.156 "raid_level": "raid1", 00:12:21.156 "superblock": false, 00:12:21.156 "num_base_bdevs": 4, 00:12:21.156 "num_base_bdevs_discovered": 4, 00:12:21.156 "num_base_bdevs_operational": 4, 00:12:21.156 "base_bdevs_list": [ 00:12:21.156 
{ 00:12:21.156 "name": "BaseBdev1", 00:12:21.156 "uuid": "5cc2cd18-99f5-5a1d-b761-c94d0b861df7", 00:12:21.156 "is_configured": true, 00:12:21.156 "data_offset": 0, 00:12:21.156 "data_size": 65536 00:12:21.156 }, 00:12:21.156 { 00:12:21.156 "name": "BaseBdev2", 00:12:21.156 "uuid": "205a678f-edcf-552d-ba71-d67542901965", 00:12:21.156 "is_configured": true, 00:12:21.156 "data_offset": 0, 00:12:21.156 "data_size": 65536 00:12:21.156 }, 00:12:21.156 { 00:12:21.156 "name": "BaseBdev3", 00:12:21.156 "uuid": "df963158-f670-5a74-a165-e8a929ddc642", 00:12:21.156 "is_configured": true, 00:12:21.156 "data_offset": 0, 00:12:21.156 "data_size": 65536 00:12:21.156 }, 00:12:21.156 { 00:12:21.156 "name": "BaseBdev4", 00:12:21.156 "uuid": "6e369241-80f6-522f-af25-71480ca24f1c", 00:12:21.156 "is_configured": true, 00:12:21.156 "data_offset": 0, 00:12:21.156 "data_size": 65536 00:12:21.156 } 00:12:21.156 ] 00:12:21.156 }' 00:12:21.156 05:01:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:21.156 05:01:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:21.415 05:01:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:21.415 05:01:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:12:21.415 05:01:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.415 05:01:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:21.415 [2024-12-14 05:01:32.281529] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:21.675 05:01:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.675 05:01:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:12:21.675 05:01:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 
00:12:21.675 05:01:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:21.675 05:01:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.675 05:01:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:21.675 05:01:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.675 05:01:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:12:21.675 05:01:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:12:21.675 05:01:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:12:21.675 05:01:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:21.675 05:01:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.675 05:01:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:21.675 [2024-12-14 05:01:32.377056] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:21.675 05:01:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.675 05:01:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:21.675 05:01:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:21.675 05:01:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:21.675 05:01:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:21.675 05:01:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:21.675 05:01:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:12:21.675 05:01:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:21.675 05:01:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:21.675 05:01:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:21.675 05:01:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:21.675 05:01:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:21.675 05:01:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:21.675 05:01:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.675 05:01:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:21.675 05:01:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.675 05:01:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:21.675 "name": "raid_bdev1", 00:12:21.675 "uuid": "36cb8af7-f076-4cfd-ad41-529eccb03745", 00:12:21.675 "strip_size_kb": 0, 00:12:21.675 "state": "online", 00:12:21.675 "raid_level": "raid1", 00:12:21.675 "superblock": false, 00:12:21.675 "num_base_bdevs": 4, 00:12:21.675 "num_base_bdevs_discovered": 3, 00:12:21.675 "num_base_bdevs_operational": 3, 00:12:21.675 "base_bdevs_list": [ 00:12:21.675 { 00:12:21.675 "name": null, 00:12:21.675 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:21.675 "is_configured": false, 00:12:21.675 "data_offset": 0, 00:12:21.675 "data_size": 65536 00:12:21.675 }, 00:12:21.675 { 00:12:21.675 "name": "BaseBdev2", 00:12:21.675 "uuid": "205a678f-edcf-552d-ba71-d67542901965", 00:12:21.675 "is_configured": true, 00:12:21.675 "data_offset": 0, 00:12:21.675 "data_size": 65536 00:12:21.675 }, 00:12:21.675 { 00:12:21.675 "name": "BaseBdev3", 00:12:21.675 "uuid": 
"df963158-f670-5a74-a165-e8a929ddc642", 00:12:21.675 "is_configured": true, 00:12:21.675 "data_offset": 0, 00:12:21.675 "data_size": 65536 00:12:21.675 }, 00:12:21.675 { 00:12:21.675 "name": "BaseBdev4", 00:12:21.675 "uuid": "6e369241-80f6-522f-af25-71480ca24f1c", 00:12:21.675 "is_configured": true, 00:12:21.675 "data_offset": 0, 00:12:21.675 "data_size": 65536 00:12:21.675 } 00:12:21.675 ] 00:12:21.675 }' 00:12:21.675 05:01:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:21.675 05:01:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:21.675 [2024-12-14 05:01:32.466910] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:12:21.675 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:21.675 Zero copy mechanism will not be used. 00:12:21.675 Running I/O for 60 seconds... 00:12:22.245 05:01:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:22.245 05:01:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.245 05:01:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:22.245 [2024-12-14 05:01:32.871128] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:22.245 05:01:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.245 05:01:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:12:22.245 [2024-12-14 05:01:32.922532] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:12:22.245 [2024-12-14 05:01:32.924604] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:22.245 [2024-12-14 05:01:33.034110] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:22.245 
[2024-12-14 05:01:33.034781] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:22.505 [2024-12-14 05:01:33.151325] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:22.505 [2024-12-14 05:01:33.151735] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:22.765 [2024-12-14 05:01:33.398826] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:12:22.765 225.00 IOPS, 675.00 MiB/s [2024-12-14T05:01:33.648Z] [2024-12-14 05:01:33.610818] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:22.765 [2024-12-14 05:01:33.611542] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:23.336 05:01:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:23.336 05:01:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:23.336 05:01:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:23.336 05:01:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:23.336 05:01:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:23.336 05:01:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:23.336 05:01:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.336 05:01:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:23.336 05:01:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 
-- # set +x 00:12:23.336 05:01:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.336 [2024-12-14 05:01:33.946931] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:12:23.336 05:01:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:23.336 "name": "raid_bdev1", 00:12:23.336 "uuid": "36cb8af7-f076-4cfd-ad41-529eccb03745", 00:12:23.336 "strip_size_kb": 0, 00:12:23.336 "state": "online", 00:12:23.336 "raid_level": "raid1", 00:12:23.336 "superblock": false, 00:12:23.336 "num_base_bdevs": 4, 00:12:23.336 "num_base_bdevs_discovered": 4, 00:12:23.336 "num_base_bdevs_operational": 4, 00:12:23.336 "process": { 00:12:23.336 "type": "rebuild", 00:12:23.336 "target": "spare", 00:12:23.336 "progress": { 00:12:23.336 "blocks": 12288, 00:12:23.336 "percent": 18 00:12:23.336 } 00:12:23.336 }, 00:12:23.336 "base_bdevs_list": [ 00:12:23.336 { 00:12:23.336 "name": "spare", 00:12:23.336 "uuid": "13d0df30-2c80-5e05-9f27-640e27b4cd64", 00:12:23.336 "is_configured": true, 00:12:23.336 "data_offset": 0, 00:12:23.336 "data_size": 65536 00:12:23.336 }, 00:12:23.336 { 00:12:23.336 "name": "BaseBdev2", 00:12:23.336 "uuid": "205a678f-edcf-552d-ba71-d67542901965", 00:12:23.336 "is_configured": true, 00:12:23.336 "data_offset": 0, 00:12:23.336 "data_size": 65536 00:12:23.336 }, 00:12:23.336 { 00:12:23.336 "name": "BaseBdev3", 00:12:23.336 "uuid": "df963158-f670-5a74-a165-e8a929ddc642", 00:12:23.336 "is_configured": true, 00:12:23.336 "data_offset": 0, 00:12:23.336 "data_size": 65536 00:12:23.336 }, 00:12:23.336 { 00:12:23.336 "name": "BaseBdev4", 00:12:23.336 "uuid": "6e369241-80f6-522f-af25-71480ca24f1c", 00:12:23.336 "is_configured": true, 00:12:23.336 "data_offset": 0, 00:12:23.336 "data_size": 65536 00:12:23.336 } 00:12:23.336 ] 00:12:23.336 }' 00:12:23.336 05:01:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r 
'.process.type // "none"' 00:12:23.336 05:01:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:23.336 05:01:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:23.336 05:01:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:23.336 05:01:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:23.336 05:01:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.336 05:01:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:23.336 [2024-12-14 05:01:34.055171] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:23.336 [2024-12-14 05:01:34.065765] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:12:23.336 [2024-12-14 05:01:34.172987] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:23.336 [2024-12-14 05:01:34.189379] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:23.336 [2024-12-14 05:01:34.189472] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:23.336 [2024-12-14 05:01:34.189500] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:23.336 [2024-12-14 05:01:34.207150] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:12:23.596 05:01:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.596 05:01:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:23.596 05:01:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:23.596 05:01:34 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:23.596 05:01:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:23.596 05:01:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:23.596 05:01:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:23.596 05:01:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:23.596 05:01:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:23.596 05:01:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:23.596 05:01:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:23.596 05:01:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:23.596 05:01:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:23.596 05:01:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.596 05:01:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:23.596 05:01:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.596 05:01:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:23.596 "name": "raid_bdev1", 00:12:23.596 "uuid": "36cb8af7-f076-4cfd-ad41-529eccb03745", 00:12:23.596 "strip_size_kb": 0, 00:12:23.596 "state": "online", 00:12:23.597 "raid_level": "raid1", 00:12:23.597 "superblock": false, 00:12:23.597 "num_base_bdevs": 4, 00:12:23.597 "num_base_bdevs_discovered": 3, 00:12:23.597 "num_base_bdevs_operational": 3, 00:12:23.597 "base_bdevs_list": [ 00:12:23.597 { 00:12:23.597 "name": null, 00:12:23.597 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:23.597 
"is_configured": false, 00:12:23.597 "data_offset": 0, 00:12:23.597 "data_size": 65536 00:12:23.597 }, 00:12:23.597 { 00:12:23.597 "name": "BaseBdev2", 00:12:23.597 "uuid": "205a678f-edcf-552d-ba71-d67542901965", 00:12:23.597 "is_configured": true, 00:12:23.597 "data_offset": 0, 00:12:23.597 "data_size": 65536 00:12:23.597 }, 00:12:23.597 { 00:12:23.597 "name": "BaseBdev3", 00:12:23.597 "uuid": "df963158-f670-5a74-a165-e8a929ddc642", 00:12:23.597 "is_configured": true, 00:12:23.597 "data_offset": 0, 00:12:23.597 "data_size": 65536 00:12:23.597 }, 00:12:23.597 { 00:12:23.597 "name": "BaseBdev4", 00:12:23.597 "uuid": "6e369241-80f6-522f-af25-71480ca24f1c", 00:12:23.597 "is_configured": true, 00:12:23.597 "data_offset": 0, 00:12:23.597 "data_size": 65536 00:12:23.597 } 00:12:23.597 ] 00:12:23.597 }' 00:12:23.597 05:01:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:23.597 05:01:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:23.857 186.00 IOPS, 558.00 MiB/s [2024-12-14T05:01:34.740Z] 05:01:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:23.857 05:01:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:23.857 05:01:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:23.857 05:01:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:23.857 05:01:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:23.857 05:01:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:23.857 05:01:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:23.857 05:01:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.857 05:01:34 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:23.857 05:01:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.857 05:01:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:23.857 "name": "raid_bdev1", 00:12:23.857 "uuid": "36cb8af7-f076-4cfd-ad41-529eccb03745", 00:12:23.857 "strip_size_kb": 0, 00:12:23.857 "state": "online", 00:12:23.857 "raid_level": "raid1", 00:12:23.857 "superblock": false, 00:12:23.857 "num_base_bdevs": 4, 00:12:23.857 "num_base_bdevs_discovered": 3, 00:12:23.857 "num_base_bdevs_operational": 3, 00:12:23.857 "base_bdevs_list": [ 00:12:23.857 { 00:12:23.857 "name": null, 00:12:23.857 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:23.857 "is_configured": false, 00:12:23.857 "data_offset": 0, 00:12:23.857 "data_size": 65536 00:12:23.857 }, 00:12:23.857 { 00:12:23.857 "name": "BaseBdev2", 00:12:23.857 "uuid": "205a678f-edcf-552d-ba71-d67542901965", 00:12:23.857 "is_configured": true, 00:12:23.857 "data_offset": 0, 00:12:23.857 "data_size": 65536 00:12:23.857 }, 00:12:23.857 { 00:12:23.857 "name": "BaseBdev3", 00:12:23.857 "uuid": "df963158-f670-5a74-a165-e8a929ddc642", 00:12:23.857 "is_configured": true, 00:12:23.857 "data_offset": 0, 00:12:23.857 "data_size": 65536 00:12:23.857 }, 00:12:23.857 { 00:12:23.857 "name": "BaseBdev4", 00:12:23.857 "uuid": "6e369241-80f6-522f-af25-71480ca24f1c", 00:12:23.857 "is_configured": true, 00:12:23.857 "data_offset": 0, 00:12:23.857 "data_size": 65536 00:12:23.857 } 00:12:23.857 ] 00:12:23.857 }' 00:12:23.857 05:01:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:24.117 05:01:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:24.117 05:01:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:24.117 05:01:34 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:24.117 05:01:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:24.117 05:01:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.117 05:01:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:24.117 [2024-12-14 05:01:34.843791] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:24.117 05:01:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.118 05:01:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:12:24.118 [2024-12-14 05:01:34.874965] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:12:24.118 [2024-12-14 05:01:34.876984] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:24.459 [2024-12-14 05:01:34.998578] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:24.459 [2024-12-14 05:01:34.998911] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:24.459 [2024-12-14 05:01:35.201448] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:24.459 [2024-12-14 05:01:35.201903] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:24.988 164.67 IOPS, 494.00 MiB/s [2024-12-14T05:01:35.871Z] [2024-12-14 05:01:35.657234] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:24.988 05:01:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:24.988 05:01:35 bdev_raid.raid_rebuild_test_io 
-- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:24.988 05:01:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:24.988 05:01:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:24.988 05:01:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:25.248 05:01:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:25.248 05:01:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:25.248 05:01:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.248 05:01:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:25.248 [2024-12-14 05:01:35.898079] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:12:25.248 05:01:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.248 05:01:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:25.248 "name": "raid_bdev1", 00:12:25.248 "uuid": "36cb8af7-f076-4cfd-ad41-529eccb03745", 00:12:25.248 "strip_size_kb": 0, 00:12:25.248 "state": "online", 00:12:25.248 "raid_level": "raid1", 00:12:25.248 "superblock": false, 00:12:25.248 "num_base_bdevs": 4, 00:12:25.248 "num_base_bdevs_discovered": 4, 00:12:25.248 "num_base_bdevs_operational": 4, 00:12:25.248 "process": { 00:12:25.248 "type": "rebuild", 00:12:25.248 "target": "spare", 00:12:25.248 "progress": { 00:12:25.248 "blocks": 12288, 00:12:25.248 "percent": 18 00:12:25.248 } 00:12:25.248 }, 00:12:25.248 "base_bdevs_list": [ 00:12:25.248 { 00:12:25.248 "name": "spare", 00:12:25.248 "uuid": "13d0df30-2c80-5e05-9f27-640e27b4cd64", 00:12:25.248 "is_configured": true, 00:12:25.248 "data_offset": 0, 00:12:25.248 "data_size": 65536 
00:12:25.248 }, 00:12:25.248 { 00:12:25.248 "name": "BaseBdev2", 00:12:25.248 "uuid": "205a678f-edcf-552d-ba71-d67542901965", 00:12:25.248 "is_configured": true, 00:12:25.248 "data_offset": 0, 00:12:25.248 "data_size": 65536 00:12:25.248 }, 00:12:25.248 { 00:12:25.248 "name": "BaseBdev3", 00:12:25.248 "uuid": "df963158-f670-5a74-a165-e8a929ddc642", 00:12:25.248 "is_configured": true, 00:12:25.248 "data_offset": 0, 00:12:25.248 "data_size": 65536 00:12:25.248 }, 00:12:25.248 { 00:12:25.248 "name": "BaseBdev4", 00:12:25.248 "uuid": "6e369241-80f6-522f-af25-71480ca24f1c", 00:12:25.248 "is_configured": true, 00:12:25.248 "data_offset": 0, 00:12:25.248 "data_size": 65536 00:12:25.248 } 00:12:25.248 ] 00:12:25.248 }' 00:12:25.248 05:01:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:25.248 05:01:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:25.248 05:01:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:25.248 05:01:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:25.248 05:01:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:12:25.248 05:01:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:12:25.248 05:01:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:12:25.248 05:01:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:12:25.248 05:01:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:25.248 05:01:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.248 05:01:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:25.248 [2024-12-14 05:01:36.023408] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:25.509 [2024-12-14 05:01:36.140481] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:12:25.509 [2024-12-14 05:01:36.254373] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006080 00:12:25.509 [2024-12-14 05:01:36.254469] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:12:25.509 05:01:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.509 05:01:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:12:25.509 05:01:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:12:25.509 05:01:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:25.509 05:01:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:25.509 05:01:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:25.509 05:01:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:25.509 05:01:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:25.509 05:01:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:25.509 05:01:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:25.509 05:01:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.509 05:01:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:25.509 05:01:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.509 05:01:36 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:25.509 "name": "raid_bdev1", 00:12:25.509 "uuid": "36cb8af7-f076-4cfd-ad41-529eccb03745", 00:12:25.509 "strip_size_kb": 0, 00:12:25.509 "state": "online", 00:12:25.509 "raid_level": "raid1", 00:12:25.509 "superblock": false, 00:12:25.509 "num_base_bdevs": 4, 00:12:25.509 "num_base_bdevs_discovered": 3, 00:12:25.509 "num_base_bdevs_operational": 3, 00:12:25.509 "process": { 00:12:25.509 "type": "rebuild", 00:12:25.509 "target": "spare", 00:12:25.509 "progress": { 00:12:25.509 "blocks": 16384, 00:12:25.509 "percent": 25 00:12:25.509 } 00:12:25.509 }, 00:12:25.509 "base_bdevs_list": [ 00:12:25.509 { 00:12:25.509 "name": "spare", 00:12:25.509 "uuid": "13d0df30-2c80-5e05-9f27-640e27b4cd64", 00:12:25.509 "is_configured": true, 00:12:25.509 "data_offset": 0, 00:12:25.509 "data_size": 65536 00:12:25.509 }, 00:12:25.509 { 00:12:25.509 "name": null, 00:12:25.509 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:25.509 "is_configured": false, 00:12:25.509 "data_offset": 0, 00:12:25.509 "data_size": 65536 00:12:25.509 }, 00:12:25.509 { 00:12:25.509 "name": "BaseBdev3", 00:12:25.509 "uuid": "df963158-f670-5a74-a165-e8a929ddc642", 00:12:25.509 "is_configured": true, 00:12:25.509 "data_offset": 0, 00:12:25.509 "data_size": 65536 00:12:25.509 }, 00:12:25.509 { 00:12:25.509 "name": "BaseBdev4", 00:12:25.509 "uuid": "6e369241-80f6-522f-af25-71480ca24f1c", 00:12:25.509 "is_configured": true, 00:12:25.509 "data_offset": 0, 00:12:25.509 "data_size": 65536 00:12:25.509 } 00:12:25.509 ] 00:12:25.509 }' 00:12:25.509 05:01:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:25.509 05:01:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:25.509 05:01:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:25.769 05:01:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare 
== \s\p\a\r\e ]] 00:12:25.769 05:01:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=387 00:12:25.769 05:01:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:25.769 05:01:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:25.769 05:01:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:25.769 05:01:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:25.769 05:01:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:25.769 05:01:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:25.769 05:01:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:25.769 05:01:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.769 05:01:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:25.769 05:01:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:25.769 05:01:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.769 05:01:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:25.769 "name": "raid_bdev1", 00:12:25.769 "uuid": "36cb8af7-f076-4cfd-ad41-529eccb03745", 00:12:25.769 "strip_size_kb": 0, 00:12:25.769 "state": "online", 00:12:25.769 "raid_level": "raid1", 00:12:25.769 "superblock": false, 00:12:25.769 "num_base_bdevs": 4, 00:12:25.769 "num_base_bdevs_discovered": 3, 00:12:25.769 "num_base_bdevs_operational": 3, 00:12:25.769 "process": { 00:12:25.769 "type": "rebuild", 00:12:25.769 "target": "spare", 00:12:25.769 "progress": { 00:12:25.769 "blocks": 18432, 00:12:25.769 "percent": 28 00:12:25.769 } 00:12:25.769 }, 
00:12:25.770 "base_bdevs_list": [ 00:12:25.770 { 00:12:25.770 "name": "spare", 00:12:25.770 "uuid": "13d0df30-2c80-5e05-9f27-640e27b4cd64", 00:12:25.770 "is_configured": true, 00:12:25.770 "data_offset": 0, 00:12:25.770 "data_size": 65536 00:12:25.770 }, 00:12:25.770 { 00:12:25.770 "name": null, 00:12:25.770 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:25.770 "is_configured": false, 00:12:25.770 "data_offset": 0, 00:12:25.770 "data_size": 65536 00:12:25.770 }, 00:12:25.770 { 00:12:25.770 "name": "BaseBdev3", 00:12:25.770 "uuid": "df963158-f670-5a74-a165-e8a929ddc642", 00:12:25.770 "is_configured": true, 00:12:25.770 "data_offset": 0, 00:12:25.770 "data_size": 65536 00:12:25.770 }, 00:12:25.770 { 00:12:25.770 "name": "BaseBdev4", 00:12:25.770 "uuid": "6e369241-80f6-522f-af25-71480ca24f1c", 00:12:25.770 "is_configured": true, 00:12:25.770 "data_offset": 0, 00:12:25.770 "data_size": 65536 00:12:25.770 } 00:12:25.770 ] 00:12:25.770 }' 00:12:25.770 05:01:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:25.770 144.00 IOPS, 432.00 MiB/s [2024-12-14T05:01:36.653Z] [2024-12-14 05:01:36.503372] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:12:25.770 05:01:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:25.770 05:01:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:25.770 05:01:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:25.770 05:01:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:26.030 [2024-12-14 05:01:36.729638] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:12:26.600 [2024-12-14 05:01:37.406675] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: 
process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:12:26.600 [2024-12-14 05:01:37.407131] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:12:26.860 125.60 IOPS, 376.80 MiB/s [2024-12-14T05:01:37.743Z] 05:01:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:26.860 05:01:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:26.860 05:01:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:26.860 05:01:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:26.860 05:01:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:26.860 05:01:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:26.860 05:01:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:26.860 05:01:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:26.860 05:01:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.861 05:01:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:26.861 05:01:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.861 05:01:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:26.861 "name": "raid_bdev1", 00:12:26.861 "uuid": "36cb8af7-f076-4cfd-ad41-529eccb03745", 00:12:26.861 "strip_size_kb": 0, 00:12:26.861 "state": "online", 00:12:26.861 "raid_level": "raid1", 00:12:26.861 "superblock": false, 00:12:26.861 "num_base_bdevs": 4, 00:12:26.861 "num_base_bdevs_discovered": 3, 00:12:26.861 "num_base_bdevs_operational": 3, 00:12:26.861 "process": { 00:12:26.861 
"type": "rebuild", 00:12:26.861 "target": "spare", 00:12:26.861 "progress": { 00:12:26.861 "blocks": 32768, 00:12:26.861 "percent": 50 00:12:26.861 } 00:12:26.861 }, 00:12:26.861 "base_bdevs_list": [ 00:12:26.861 { 00:12:26.861 "name": "spare", 00:12:26.861 "uuid": "13d0df30-2c80-5e05-9f27-640e27b4cd64", 00:12:26.861 "is_configured": true, 00:12:26.861 "data_offset": 0, 00:12:26.861 "data_size": 65536 00:12:26.861 }, 00:12:26.861 { 00:12:26.861 "name": null, 00:12:26.861 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:26.861 "is_configured": false, 00:12:26.861 "data_offset": 0, 00:12:26.861 "data_size": 65536 00:12:26.861 }, 00:12:26.861 { 00:12:26.861 "name": "BaseBdev3", 00:12:26.861 "uuid": "df963158-f670-5a74-a165-e8a929ddc642", 00:12:26.861 "is_configured": true, 00:12:26.861 "data_offset": 0, 00:12:26.861 "data_size": 65536 00:12:26.861 }, 00:12:26.861 { 00:12:26.861 "name": "BaseBdev4", 00:12:26.861 "uuid": "6e369241-80f6-522f-af25-71480ca24f1c", 00:12:26.861 "is_configured": true, 00:12:26.861 "data_offset": 0, 00:12:26.861 "data_size": 65536 00:12:26.861 } 00:12:26.861 ] 00:12:26.861 }' 00:12:26.861 05:01:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:26.861 [2024-12-14 05:01:37.622307] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:12:26.861 05:01:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:26.861 05:01:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:26.861 05:01:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:26.861 05:01:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:27.431 [2024-12-14 05:01:38.295654] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 
00:12:27.691 112.00 IOPS, 336.00 MiB/s [2024-12-14T05:01:38.574Z] [2024-12-14 05:01:38.503111] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:12:27.691 [2024-12-14 05:01:38.503351] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:12:27.952 05:01:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:27.952 05:01:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:27.952 05:01:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:27.952 05:01:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:27.952 05:01:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:27.952 05:01:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:27.952 05:01:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:27.952 05:01:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.952 05:01:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:27.952 05:01:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:27.952 [2024-12-14 05:01:38.739722] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:12:27.952 05:01:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.952 05:01:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:27.952 "name": "raid_bdev1", 00:12:27.952 "uuid": "36cb8af7-f076-4cfd-ad41-529eccb03745", 00:12:27.952 "strip_size_kb": 0, 
00:12:27.952 "state": "online", 00:12:27.952 "raid_level": "raid1", 00:12:27.952 "superblock": false, 00:12:27.952 "num_base_bdevs": 4, 00:12:27.952 "num_base_bdevs_discovered": 3, 00:12:27.952 "num_base_bdevs_operational": 3, 00:12:27.952 "process": { 00:12:27.952 "type": "rebuild", 00:12:27.952 "target": "spare", 00:12:27.952 "progress": { 00:12:27.952 "blocks": 49152, 00:12:27.952 "percent": 75 00:12:27.952 } 00:12:27.952 }, 00:12:27.952 "base_bdevs_list": [ 00:12:27.952 { 00:12:27.952 "name": "spare", 00:12:27.952 "uuid": "13d0df30-2c80-5e05-9f27-640e27b4cd64", 00:12:27.952 "is_configured": true, 00:12:27.952 "data_offset": 0, 00:12:27.952 "data_size": 65536 00:12:27.952 }, 00:12:27.952 { 00:12:27.952 "name": null, 00:12:27.952 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:27.952 "is_configured": false, 00:12:27.952 "data_offset": 0, 00:12:27.952 "data_size": 65536 00:12:27.952 }, 00:12:27.952 { 00:12:27.952 "name": "BaseBdev3", 00:12:27.952 "uuid": "df963158-f670-5a74-a165-e8a929ddc642", 00:12:27.952 "is_configured": true, 00:12:27.952 "data_offset": 0, 00:12:27.952 "data_size": 65536 00:12:27.952 }, 00:12:27.952 { 00:12:27.952 "name": "BaseBdev4", 00:12:27.952 "uuid": "6e369241-80f6-522f-af25-71480ca24f1c", 00:12:27.952 "is_configured": true, 00:12:27.952 "data_offset": 0, 00:12:27.952 "data_size": 65536 00:12:27.952 } 00:12:27.952 ] 00:12:27.952 }' 00:12:27.952 05:01:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:27.952 05:01:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:27.952 05:01:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:28.212 05:01:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:28.212 05:01:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:28.212 [2024-12-14 05:01:38.958773] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:12:28.782 [2024-12-14 05:01:39.386546] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:12:29.042 100.00 IOPS, 300.00 MiB/s [2024-12-14T05:01:39.925Z] [2024-12-14 05:01:39.813051] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:12:29.042 05:01:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:29.042 05:01:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:29.042 05:01:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:29.042 05:01:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:29.042 05:01:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:29.042 05:01:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:29.042 05:01:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:29.042 05:01:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:29.042 05:01:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.042 05:01:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:29.042 05:01:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.042 [2024-12-14 05:01:39.912849] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:12:29.042 [2024-12-14 05:01:39.915086] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:29.042 05:01:39 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:29.042 "name": "raid_bdev1", 00:12:29.042 "uuid": "36cb8af7-f076-4cfd-ad41-529eccb03745", 00:12:29.042 "strip_size_kb": 0, 00:12:29.042 "state": "online", 00:12:29.042 "raid_level": "raid1", 00:12:29.042 "superblock": false, 00:12:29.042 "num_base_bdevs": 4, 00:12:29.042 "num_base_bdevs_discovered": 3, 00:12:29.042 "num_base_bdevs_operational": 3, 00:12:29.042 "process": { 00:12:29.042 "type": "rebuild", 00:12:29.042 "target": "spare", 00:12:29.042 "progress": { 00:12:29.042 "blocks": 65536, 00:12:29.042 "percent": 100 00:12:29.042 } 00:12:29.042 }, 00:12:29.042 "base_bdevs_list": [ 00:12:29.042 { 00:12:29.042 "name": "spare", 00:12:29.042 "uuid": "13d0df30-2c80-5e05-9f27-640e27b4cd64", 00:12:29.042 "is_configured": true, 00:12:29.042 "data_offset": 0, 00:12:29.042 "data_size": 65536 00:12:29.042 }, 00:12:29.042 { 00:12:29.042 "name": null, 00:12:29.042 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:29.042 "is_configured": false, 00:12:29.042 "data_offset": 0, 00:12:29.042 "data_size": 65536 00:12:29.042 }, 00:12:29.042 { 00:12:29.042 "name": "BaseBdev3", 00:12:29.042 "uuid": "df963158-f670-5a74-a165-e8a929ddc642", 00:12:29.042 "is_configured": true, 00:12:29.042 "data_offset": 0, 00:12:29.042 "data_size": 65536 00:12:29.042 }, 00:12:29.042 { 00:12:29.042 "name": "BaseBdev4", 00:12:29.042 "uuid": "6e369241-80f6-522f-af25-71480ca24f1c", 00:12:29.042 "is_configured": true, 00:12:29.042 "data_offset": 0, 00:12:29.042 "data_size": 65536 00:12:29.043 } 00:12:29.043 ] 00:12:29.043 }' 00:12:29.302 05:01:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:29.302 05:01:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:29.302 05:01:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:29.302 05:01:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare 
== \s\p\a\r\e ]] 00:12:29.302 05:01:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:30.442 91.38 IOPS, 274.12 MiB/s [2024-12-14T05:01:41.325Z] 05:01:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:30.442 05:01:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:30.442 05:01:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:30.442 05:01:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:30.442 05:01:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:30.442 05:01:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:30.442 05:01:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:30.442 05:01:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.442 05:01:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:30.442 05:01:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:30.442 05:01:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.442 05:01:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:30.442 "name": "raid_bdev1", 00:12:30.442 "uuid": "36cb8af7-f076-4cfd-ad41-529eccb03745", 00:12:30.442 "strip_size_kb": 0, 00:12:30.442 "state": "online", 00:12:30.442 "raid_level": "raid1", 00:12:30.442 "superblock": false, 00:12:30.442 "num_base_bdevs": 4, 00:12:30.442 "num_base_bdevs_discovered": 3, 00:12:30.442 "num_base_bdevs_operational": 3, 00:12:30.442 "base_bdevs_list": [ 00:12:30.442 { 00:12:30.442 "name": "spare", 00:12:30.442 "uuid": "13d0df30-2c80-5e05-9f27-640e27b4cd64", 00:12:30.442 
"is_configured": true, 00:12:30.442 "data_offset": 0, 00:12:30.442 "data_size": 65536 00:12:30.442 }, 00:12:30.442 { 00:12:30.442 "name": null, 00:12:30.442 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:30.442 "is_configured": false, 00:12:30.442 "data_offset": 0, 00:12:30.442 "data_size": 65536 00:12:30.442 }, 00:12:30.442 { 00:12:30.442 "name": "BaseBdev3", 00:12:30.442 "uuid": "df963158-f670-5a74-a165-e8a929ddc642", 00:12:30.442 "is_configured": true, 00:12:30.442 "data_offset": 0, 00:12:30.442 "data_size": 65536 00:12:30.442 }, 00:12:30.442 { 00:12:30.442 "name": "BaseBdev4", 00:12:30.442 "uuid": "6e369241-80f6-522f-af25-71480ca24f1c", 00:12:30.442 "is_configured": true, 00:12:30.442 "data_offset": 0, 00:12:30.442 "data_size": 65536 00:12:30.442 } 00:12:30.442 ] 00:12:30.442 }' 00:12:30.442 05:01:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:30.442 05:01:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:12:30.442 05:01:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:30.442 05:01:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:12:30.442 05:01:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:12:30.442 05:01:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:30.442 05:01:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:30.442 05:01:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:30.442 05:01:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:30.442 05:01:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:30.442 05:01:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:12:30.442 05:01:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:30.442 05:01:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.442 05:01:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:30.442 05:01:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.442 05:01:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:30.442 "name": "raid_bdev1", 00:12:30.442 "uuid": "36cb8af7-f076-4cfd-ad41-529eccb03745", 00:12:30.442 "strip_size_kb": 0, 00:12:30.442 "state": "online", 00:12:30.442 "raid_level": "raid1", 00:12:30.442 "superblock": false, 00:12:30.442 "num_base_bdevs": 4, 00:12:30.442 "num_base_bdevs_discovered": 3, 00:12:30.442 "num_base_bdevs_operational": 3, 00:12:30.442 "base_bdevs_list": [ 00:12:30.442 { 00:12:30.442 "name": "spare", 00:12:30.442 "uuid": "13d0df30-2c80-5e05-9f27-640e27b4cd64", 00:12:30.442 "is_configured": true, 00:12:30.442 "data_offset": 0, 00:12:30.442 "data_size": 65536 00:12:30.442 }, 00:12:30.442 { 00:12:30.442 "name": null, 00:12:30.442 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:30.442 "is_configured": false, 00:12:30.442 "data_offset": 0, 00:12:30.442 "data_size": 65536 00:12:30.442 }, 00:12:30.442 { 00:12:30.442 "name": "BaseBdev3", 00:12:30.442 "uuid": "df963158-f670-5a74-a165-e8a929ddc642", 00:12:30.442 "is_configured": true, 00:12:30.442 "data_offset": 0, 00:12:30.442 "data_size": 65536 00:12:30.442 }, 00:12:30.442 { 00:12:30.442 "name": "BaseBdev4", 00:12:30.442 "uuid": "6e369241-80f6-522f-af25-71480ca24f1c", 00:12:30.442 "is_configured": true, 00:12:30.442 "data_offset": 0, 00:12:30.442 "data_size": 65536 00:12:30.442 } 00:12:30.442 ] 00:12:30.442 }' 00:12:30.442 05:01:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:30.442 
05:01:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:30.442 05:01:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:30.442 05:01:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:30.442 05:01:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:30.442 05:01:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:30.442 05:01:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:30.442 05:01:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:30.442 05:01:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:30.442 05:01:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:30.442 05:01:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:30.442 05:01:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:30.442 05:01:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:30.442 05:01:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:30.442 05:01:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:30.443 05:01:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:30.443 05:01:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.443 05:01:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:30.443 05:01:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.703 05:01:41 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:30.703 "name": "raid_bdev1", 00:12:30.703 "uuid": "36cb8af7-f076-4cfd-ad41-529eccb03745", 00:12:30.703 "strip_size_kb": 0, 00:12:30.703 "state": "online", 00:12:30.703 "raid_level": "raid1", 00:12:30.703 "superblock": false, 00:12:30.703 "num_base_bdevs": 4, 00:12:30.703 "num_base_bdevs_discovered": 3, 00:12:30.703 "num_base_bdevs_operational": 3, 00:12:30.703 "base_bdevs_list": [ 00:12:30.703 { 00:12:30.703 "name": "spare", 00:12:30.703 "uuid": "13d0df30-2c80-5e05-9f27-640e27b4cd64", 00:12:30.703 "is_configured": true, 00:12:30.703 "data_offset": 0, 00:12:30.703 "data_size": 65536 00:12:30.703 }, 00:12:30.703 { 00:12:30.703 "name": null, 00:12:30.703 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:30.703 "is_configured": false, 00:12:30.703 "data_offset": 0, 00:12:30.703 "data_size": 65536 00:12:30.703 }, 00:12:30.703 { 00:12:30.703 "name": "BaseBdev3", 00:12:30.703 "uuid": "df963158-f670-5a74-a165-e8a929ddc642", 00:12:30.703 "is_configured": true, 00:12:30.703 "data_offset": 0, 00:12:30.703 "data_size": 65536 00:12:30.703 }, 00:12:30.703 { 00:12:30.703 "name": "BaseBdev4", 00:12:30.703 "uuid": "6e369241-80f6-522f-af25-71480ca24f1c", 00:12:30.703 "is_configured": true, 00:12:30.703 "data_offset": 0, 00:12:30.703 "data_size": 65536 00:12:30.703 } 00:12:30.703 ] 00:12:30.703 }' 00:12:30.703 05:01:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:30.703 05:01:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:30.963 85.67 IOPS, 257.00 MiB/s [2024-12-14T05:01:41.846Z] 05:01:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:30.963 05:01:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.963 05:01:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:30.963 [2024-12-14 05:01:41.737117] 
bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:30.963 [2024-12-14 05:01:41.737230] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:30.963 00:12:30.963 Latency(us) 00:12:30.963 [2024-12-14T05:01:41.846Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:30.963 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:12:30.963 raid_bdev1 : 9.38 83.78 251.34 0.00 0.00 16810.66 270.09 114473.36 00:12:30.963 [2024-12-14T05:01:41.846Z] =================================================================================================================== 00:12:30.963 [2024-12-14T05:01:41.846Z] Total : 83.78 251.34 0.00 0.00 16810.66 270.09 114473.36 00:12:30.963 [2024-12-14 05:01:41.836056] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:30.963 [2024-12-14 05:01:41.836129] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:30.963 [2024-12-14 05:01:41.836245] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:30.963 [2024-12-14 05:01:41.836294] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:12:30.963 { 00:12:30.963 "results": [ 00:12:30.963 { 00:12:30.963 "job": "raid_bdev1", 00:12:30.963 "core_mask": "0x1", 00:12:30.963 "workload": "randrw", 00:12:30.963 "percentage": 50, 00:12:30.963 "status": "finished", 00:12:30.963 "queue_depth": 2, 00:12:30.963 "io_size": 3145728, 00:12:30.963 "runtime": 9.381899, 00:12:30.963 "iops": 83.778348072176, 00:12:30.963 "mibps": 251.335044216528, 00:12:30.963 "io_failed": 0, 00:12:30.963 "io_timeout": 0, 00:12:30.963 "avg_latency_us": 16810.662684311697, 00:12:30.963 "min_latency_us": 270.0855895196507, 00:12:30.963 "max_latency_us": 114473.36244541485 00:12:30.963 } 00:12:30.963 ], 00:12:30.963 "core_count": 
1 00:12:30.963 } 00:12:30.963 05:01:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.963 05:01:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:30.963 05:01:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:12:31.224 05:01:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.224 05:01:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:31.224 05:01:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.224 05:01:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:12:31.224 05:01:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:12:31.224 05:01:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:12:31.224 05:01:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:12:31.224 05:01:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:31.224 05:01:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:12:31.224 05:01:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:31.224 05:01:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:31.224 05:01:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:31.224 05:01:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:12:31.224 05:01:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:31.224 05:01:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:31.224 05:01:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:12:31.224 /dev/nbd0 00:12:31.484 05:01:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:31.484 05:01:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:31.484 05:01:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:12:31.484 05:01:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:12:31.484 05:01:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:31.484 05:01:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:31.484 05:01:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:12:31.484 05:01:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:12:31.484 05:01:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:31.484 05:01:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:31.484 05:01:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:31.484 1+0 records in 00:12:31.484 1+0 records out 00:12:31.484 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000357367 s, 11.5 MB/s 00:12:31.484 05:01:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:31.484 05:01:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:12:31.484 05:01:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:31.484 05:01:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 
00:12:31.484 05:01:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:12:31.484 05:01:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:31.484 05:01:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:31.484 05:01:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:12:31.484 05:01:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:12:31.484 05:01:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@728 -- # continue 00:12:31.484 05:01:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:12:31.484 05:01:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:12:31.484 05:01:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:12:31.484 05:01:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:31.484 05:01:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:12:31.484 05:01:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:31.484 05:01:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:12:31.484 05:01:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:31.484 05:01:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:12:31.484 05:01:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:31.484 05:01:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:31.484 05:01:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:12:31.484 /dev/nbd1 00:12:31.744 05:01:42 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:31.744 05:01:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:31.744 05:01:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:12:31.744 05:01:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:12:31.744 05:01:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:31.744 05:01:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:31.744 05:01:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:12:31.744 05:01:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:12:31.744 05:01:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:31.744 05:01:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:31.744 05:01:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:31.744 1+0 records in 00:12:31.744 1+0 records out 00:12:31.744 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000658113 s, 6.2 MB/s 00:12:31.744 05:01:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:31.744 05:01:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:12:31.744 05:01:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:31.744 05:01:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:31.744 05:01:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:12:31.744 05:01:42 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:31.744 05:01:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:31.744 05:01:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:12:31.744 05:01:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:12:31.744 05:01:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:31.744 05:01:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:12:31.744 05:01:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:31.744 05:01:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:12:31.744 05:01:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:31.744 05:01:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:12:32.004 05:01:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:32.004 05:01:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:32.004 05:01:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:32.004 05:01:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:32.004 05:01:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:32.004 05:01:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:32.004 05:01:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:12:32.004 05:01:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:12:32.004 05:01:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in 
"${base_bdevs[@]:1}" 00:12:32.004 05:01:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:12:32.004 05:01:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:12:32.004 05:01:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:32.004 05:01:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:12:32.004 05:01:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:32.004 05:01:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:12:32.004 05:01:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:32.004 05:01:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:12:32.004 05:01:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:32.004 05:01:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:32.004 05:01:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:12:32.004 /dev/nbd1 00:12:32.265 05:01:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:32.265 05:01:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:32.265 05:01:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:12:32.265 05:01:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:12:32.265 05:01:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:32.265 05:01:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:32.265 05:01:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w 
nbd1 /proc/partitions 00:12:32.265 05:01:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:12:32.265 05:01:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:32.265 05:01:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:32.265 05:01:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:32.265 1+0 records in 00:12:32.265 1+0 records out 00:12:32.265 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00047924 s, 8.5 MB/s 00:12:32.265 05:01:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:32.265 05:01:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:12:32.265 05:01:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:32.265 05:01:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:32.265 05:01:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:12:32.265 05:01:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:32.265 05:01:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:32.265 05:01:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:12:32.265 05:01:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:12:32.265 05:01:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:32.265 05:01:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:12:32.265 05:01:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 
00:12:32.265 05:01:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:12:32.265 05:01:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:32.265 05:01:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:12:32.525 05:01:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:32.525 05:01:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:32.525 05:01:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:32.525 05:01:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:32.525 05:01:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:32.525 05:01:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:32.525 05:01:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:12:32.525 05:01:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:12:32.525 05:01:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:12:32.525 05:01:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:32.525 05:01:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:32.525 05:01:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:32.525 05:01:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:12:32.525 05:01:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:32.525 05:01:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 
00:12:32.786 05:01:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:32.786 05:01:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:32.786 05:01:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:32.786 05:01:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:32.786 05:01:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:32.786 05:01:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:32.786 05:01:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:12:32.786 05:01:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:12:32.786 05:01:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:12:32.786 05:01:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 89332 00:12:32.786 05:01:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@950 -- # '[' -z 89332 ']' 00:12:32.786 05:01:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # kill -0 89332 00:12:32.786 05:01:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@955 -- # uname 00:12:32.786 05:01:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:32.786 05:01:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 89332 00:12:32.786 05:01:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:32.786 05:01:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:32.786 05:01:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@968 -- # echo 'killing process with pid 89332' 00:12:32.786 killing process with pid 89332 00:12:32.786 05:01:43 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@969 -- # kill 89332 00:12:32.786 Received shutdown signal, test time was about 11.055812 seconds 00:12:32.786 00:12:32.786 Latency(us) 00:12:32.786 [2024-12-14T05:01:43.669Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:32.786 [2024-12-14T05:01:43.669Z] =================================================================================================================== 00:12:32.786 [2024-12-14T05:01:43.669Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:32.786 [2024-12-14 05:01:43.503608] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:32.786 05:01:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@974 -- # wait 89332 00:12:32.786 [2024-12-14 05:01:43.549040] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:33.046 05:01:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:12:33.046 00:12:33.046 real 0m13.106s 00:12:33.046 user 0m16.824s 00:12:33.046 sys 0m1.873s 00:12:33.046 05:01:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:33.046 ************************************ 00:12:33.046 END TEST raid_rebuild_test_io 00:12:33.046 ************************************ 00:12:33.046 05:01:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:33.046 05:01:43 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true true 00:12:33.046 05:01:43 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:12:33.046 05:01:43 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:33.046 05:01:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:33.046 ************************************ 00:12:33.046 START TEST raid_rebuild_test_sb_io 00:12:33.046 ************************************ 00:12:33.046 05:01:43 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 4 true true true 00:12:33.046 05:01:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:12:33.046 05:01:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:12:33.046 05:01:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:12:33.046 05:01:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:12:33.046 05:01:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:12:33.046 05:01:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:33.046 05:01:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:33.046 05:01:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:33.046 05:01:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:33.046 05:01:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:33.046 05:01:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:12:33.046 05:01:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:33.046 05:01:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:33.046 05:01:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:12:33.046 05:01:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:33.046 05:01:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:33.046 05:01:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:12:33.046 05:01:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:33.046 05:01:43 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:33.046 05:01:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:33.046 05:01:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:33.046 05:01:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:33.046 05:01:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:12:33.046 05:01:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:33.046 05:01:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:33.046 05:01:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:33.046 05:01:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:12:33.046 05:01:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:12:33.046 05:01:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:12:33.046 05:01:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:12:33.047 05:01:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=89743 00:12:33.047 05:01:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 89743 00:12:33.047 05:01:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:33.047 05:01:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@831 -- # '[' -z 89743 ']' 00:12:33.047 05:01:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:33.047 05:01:43 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:33.047 05:01:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:33.047 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:33.047 05:01:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:33.047 05:01:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:33.306 [2024-12-14 05:01:43.972373] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:12:33.306 [2024-12-14 05:01:43.972590] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89743 ] 00:12:33.306 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:33.306 Zero copy mechanism will not be used. 
00:12:33.306 [2024-12-14 05:01:44.144527] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:33.565 [2024-12-14 05:01:44.192917] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:33.565 [2024-12-14 05:01:44.236161] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:33.565 [2024-12-14 05:01:44.236307] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:34.136 05:01:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:34.136 05:01:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # return 0 00:12:34.136 05:01:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:34.136 05:01:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:34.136 05:01:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.136 05:01:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:34.136 BaseBdev1_malloc 00:12:34.136 05:01:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.136 05:01:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:34.136 05:01:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.136 05:01:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:34.136 [2024-12-14 05:01:44.798784] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:34.136 [2024-12-14 05:01:44.798887] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:34.136 [2024-12-14 05:01:44.798947] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 
00:12:34.136 [2024-12-14 05:01:44.798980] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:34.136 [2024-12-14 05:01:44.801105] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:34.136 [2024-12-14 05:01:44.801183] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:34.136 BaseBdev1 00:12:34.137 05:01:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.137 05:01:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:34.137 05:01:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:34.137 05:01:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.137 05:01:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:34.137 BaseBdev2_malloc 00:12:34.137 05:01:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.137 05:01:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:12:34.137 05:01:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.137 05:01:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:34.137 [2024-12-14 05:01:44.841095] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:12:34.137 [2024-12-14 05:01:44.841372] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:34.137 [2024-12-14 05:01:44.841487] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:34.137 [2024-12-14 05:01:44.841525] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:34.137 [2024-12-14 05:01:44.845781] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:34.137 [2024-12-14 05:01:44.845845] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:34.137 BaseBdev2 00:12:34.137 05:01:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.137 05:01:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:34.137 05:01:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:34.137 05:01:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.137 05:01:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:34.137 BaseBdev3_malloc 00:12:34.137 05:01:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.137 05:01:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:12:34.137 05:01:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.137 05:01:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:34.137 [2024-12-14 05:01:44.871872] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:12:34.137 [2024-12-14 05:01:44.871985] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:34.137 [2024-12-14 05:01:44.872024] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:12:34.137 [2024-12-14 05:01:44.872054] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:34.137 [2024-12-14 05:01:44.874103] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:34.137 [2024-12-14 05:01:44.874179] vbdev_passthru.c: 710:vbdev_passthru_register: 
*NOTICE*: created pt_bdev for: BaseBdev3 00:12:34.137 BaseBdev3 00:12:34.137 05:01:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.137 05:01:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:34.137 05:01:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:12:34.137 05:01:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.137 05:01:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:34.137 BaseBdev4_malloc 00:12:34.137 05:01:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.137 05:01:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:12:34.137 05:01:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.137 05:01:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:34.137 [2024-12-14 05:01:44.900472] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:12:34.137 [2024-12-14 05:01:44.900563] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:34.137 [2024-12-14 05:01:44.900608] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:12:34.137 [2024-12-14 05:01:44.900637] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:34.137 [2024-12-14 05:01:44.902655] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:34.137 [2024-12-14 05:01:44.902719] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:12:34.137 BaseBdev4 00:12:34.137 05:01:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:12:34.137 05:01:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:12:34.137 05:01:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.137 05:01:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:34.137 spare_malloc 00:12:34.137 05:01:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.137 05:01:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:12:34.137 05:01:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.137 05:01:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:34.137 spare_delay 00:12:34.137 05:01:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.137 05:01:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:34.137 05:01:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.137 05:01:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:34.137 [2024-12-14 05:01:44.941152] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:34.137 [2024-12-14 05:01:44.941250] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:34.137 [2024-12-14 05:01:44.941306] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:12:34.137 [2024-12-14 05:01:44.941334] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:34.137 [2024-12-14 05:01:44.943378] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:34.137 [2024-12-14 05:01:44.943442] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:34.137 spare 00:12:34.137 05:01:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.137 05:01:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:12:34.137 05:01:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.137 05:01:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:34.137 [2024-12-14 05:01:44.953251] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:34.137 [2024-12-14 05:01:44.955061] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:34.137 [2024-12-14 05:01:44.955176] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:34.137 [2024-12-14 05:01:44.955251] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:34.137 [2024-12-14 05:01:44.955474] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:12:34.137 [2024-12-14 05:01:44.955520] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:34.137 [2024-12-14 05:01:44.955767] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:34.137 [2024-12-14 05:01:44.955955] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:12:34.137 [2024-12-14 05:01:44.956000] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:12:34.137 [2024-12-14 05:01:44.956144] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:34.137 05:01:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:12:34.137 05:01:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:12:34.137 05:01:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:34.137 05:01:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:34.137 05:01:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:34.137 05:01:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:34.137 05:01:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:34.137 05:01:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:34.137 05:01:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:34.137 05:01:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:34.137 05:01:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:34.137 05:01:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:34.137 05:01:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:34.137 05:01:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.137 05:01:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:34.137 05:01:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.137 05:01:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:34.137 "name": "raid_bdev1", 00:12:34.137 "uuid": "410dd492-4e6e-4bf0-9abe-b1bd3c9db502", 00:12:34.137 "strip_size_kb": 0, 00:12:34.137 "state": "online", 00:12:34.137 "raid_level": "raid1", 
00:12:34.137 "superblock": true, 00:12:34.137 "num_base_bdevs": 4, 00:12:34.137 "num_base_bdevs_discovered": 4, 00:12:34.137 "num_base_bdevs_operational": 4, 00:12:34.137 "base_bdevs_list": [ 00:12:34.137 { 00:12:34.137 "name": "BaseBdev1", 00:12:34.137 "uuid": "36ebed80-7067-57aa-b469-d87aa6216d4b", 00:12:34.137 "is_configured": true, 00:12:34.137 "data_offset": 2048, 00:12:34.137 "data_size": 63488 00:12:34.137 }, 00:12:34.137 { 00:12:34.137 "name": "BaseBdev2", 00:12:34.137 "uuid": "aa65fc04-33bc-5c7c-8507-054a5e8490c3", 00:12:34.137 "is_configured": true, 00:12:34.137 "data_offset": 2048, 00:12:34.137 "data_size": 63488 00:12:34.137 }, 00:12:34.137 { 00:12:34.137 "name": "BaseBdev3", 00:12:34.137 "uuid": "dea07963-0c09-5749-a518-ed3323560e82", 00:12:34.137 "is_configured": true, 00:12:34.137 "data_offset": 2048, 00:12:34.137 "data_size": 63488 00:12:34.137 }, 00:12:34.138 { 00:12:34.138 "name": "BaseBdev4", 00:12:34.138 "uuid": "92d10ca8-87dc-5a62-b03b-af35b6d4d927", 00:12:34.138 "is_configured": true, 00:12:34.138 "data_offset": 2048, 00:12:34.138 "data_size": 63488 00:12:34.138 } 00:12:34.138 ] 00:12:34.138 }' 00:12:34.138 05:01:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:34.138 05:01:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:34.705 05:01:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:34.705 05:01:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.705 05:01:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:34.705 05:01:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:12:34.705 [2024-12-14 05:01:45.416667] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:34.705 05:01:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:12:34.705 05:01:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:12:34.705 05:01:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:34.705 05:01:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.705 05:01:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:34.705 05:01:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:12:34.705 05:01:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.705 05:01:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:12:34.705 05:01:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:12:34.705 05:01:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:12:34.705 05:01:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:34.705 05:01:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.705 05:01:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:34.705 [2024-12-14 05:01:45.512213] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:34.705 05:01:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.705 05:01:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:34.706 05:01:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:34.706 05:01:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:34.706 05:01:45 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:34.706 05:01:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:34.706 05:01:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:34.706 05:01:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:34.706 05:01:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:34.706 05:01:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:34.706 05:01:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:34.706 05:01:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:34.706 05:01:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:34.706 05:01:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.706 05:01:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:34.706 05:01:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.706 05:01:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:34.706 "name": "raid_bdev1", 00:12:34.706 "uuid": "410dd492-4e6e-4bf0-9abe-b1bd3c9db502", 00:12:34.706 "strip_size_kb": 0, 00:12:34.706 "state": "online", 00:12:34.706 "raid_level": "raid1", 00:12:34.706 "superblock": true, 00:12:34.706 "num_base_bdevs": 4, 00:12:34.706 "num_base_bdevs_discovered": 3, 00:12:34.706 "num_base_bdevs_operational": 3, 00:12:34.706 "base_bdevs_list": [ 00:12:34.706 { 00:12:34.706 "name": null, 00:12:34.706 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:34.706 "is_configured": false, 00:12:34.706 "data_offset": 0, 00:12:34.706 "data_size": 
63488 00:12:34.706 }, 00:12:34.706 { 00:12:34.706 "name": "BaseBdev2", 00:12:34.706 "uuid": "aa65fc04-33bc-5c7c-8507-054a5e8490c3", 00:12:34.706 "is_configured": true, 00:12:34.706 "data_offset": 2048, 00:12:34.706 "data_size": 63488 00:12:34.706 }, 00:12:34.706 { 00:12:34.706 "name": "BaseBdev3", 00:12:34.706 "uuid": "dea07963-0c09-5749-a518-ed3323560e82", 00:12:34.706 "is_configured": true, 00:12:34.706 "data_offset": 2048, 00:12:34.706 "data_size": 63488 00:12:34.706 }, 00:12:34.706 { 00:12:34.706 "name": "BaseBdev4", 00:12:34.706 "uuid": "92d10ca8-87dc-5a62-b03b-af35b6d4d927", 00:12:34.706 "is_configured": true, 00:12:34.706 "data_offset": 2048, 00:12:34.706 "data_size": 63488 00:12:34.706 } 00:12:34.706 ] 00:12:34.706 }' 00:12:34.706 05:01:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:34.706 05:01:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:34.965 [2024-12-14 05:01:45.602105] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:12:34.965 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:34.965 Zero copy mechanism will not be used. 00:12:34.965 Running I/O for 60 seconds... 
00:12:35.224 05:01:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:35.224 05:01:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.224 05:01:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:35.224 [2024-12-14 05:01:45.967109] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:35.224 05:01:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.224 05:01:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:12:35.224 [2024-12-14 05:01:46.028425] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:12:35.224 [2024-12-14 05:01:46.030462] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:35.484 [2024-12-14 05:01:46.156580] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:35.484 [2024-12-14 05:01:46.157809] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:35.743 [2024-12-14 05:01:46.373279] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:35.743 [2024-12-14 05:01:46.373711] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:36.002 191.00 IOPS, 573.00 MiB/s [2024-12-14T05:01:46.885Z] [2024-12-14 05:01:46.732133] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:36.002 [2024-12-14 05:01:46.732866] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:36.262 05:01:47 bdev_raid.raid_rebuild_test_sb_io 
-- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:36.262 05:01:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:36.262 05:01:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:36.262 05:01:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:36.262 05:01:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:36.262 05:01:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:36.262 05:01:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:36.262 05:01:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.262 05:01:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:36.262 05:01:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.262 05:01:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:36.262 "name": "raid_bdev1", 00:12:36.262 "uuid": "410dd492-4e6e-4bf0-9abe-b1bd3c9db502", 00:12:36.262 "strip_size_kb": 0, 00:12:36.262 "state": "online", 00:12:36.262 "raid_level": "raid1", 00:12:36.262 "superblock": true, 00:12:36.262 "num_base_bdevs": 4, 00:12:36.262 "num_base_bdevs_discovered": 4, 00:12:36.262 "num_base_bdevs_operational": 4, 00:12:36.262 "process": { 00:12:36.262 "type": "rebuild", 00:12:36.262 "target": "spare", 00:12:36.262 "progress": { 00:12:36.262 "blocks": 12288, 00:12:36.262 "percent": 19 00:12:36.262 } 00:12:36.262 }, 00:12:36.262 "base_bdevs_list": [ 00:12:36.262 { 00:12:36.262 "name": "spare", 00:12:36.262 "uuid": "5ae75a82-f9cb-5fb9-b223-7fa7d403f67b", 00:12:36.262 "is_configured": true, 00:12:36.262 "data_offset": 2048, 00:12:36.262 "data_size": 63488 
00:12:36.262 }, 00:12:36.262 { 00:12:36.262 "name": "BaseBdev2", 00:12:36.262 "uuid": "aa65fc04-33bc-5c7c-8507-054a5e8490c3", 00:12:36.262 "is_configured": true, 00:12:36.262 "data_offset": 2048, 00:12:36.262 "data_size": 63488 00:12:36.262 }, 00:12:36.262 { 00:12:36.262 "name": "BaseBdev3", 00:12:36.262 "uuid": "dea07963-0c09-5749-a518-ed3323560e82", 00:12:36.262 "is_configured": true, 00:12:36.262 "data_offset": 2048, 00:12:36.262 "data_size": 63488 00:12:36.262 }, 00:12:36.262 { 00:12:36.262 "name": "BaseBdev4", 00:12:36.262 "uuid": "92d10ca8-87dc-5a62-b03b-af35b6d4d927", 00:12:36.262 "is_configured": true, 00:12:36.262 "data_offset": 2048, 00:12:36.262 "data_size": 63488 00:12:36.262 } 00:12:36.262 ] 00:12:36.262 }' 00:12:36.262 05:01:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:36.262 [2024-12-14 05:01:47.104281] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:12:36.262 [2024-12-14 05:01:47.105535] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:12:36.262 05:01:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:36.262 05:01:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:36.523 05:01:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:36.523 05:01:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:36.523 05:01:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.523 05:01:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:36.523 [2024-12-14 05:01:47.172323] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:36.523 [2024-12-14 
05:01:47.332227] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:36.523 [2024-12-14 05:01:47.348752] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:36.523 [2024-12-14 05:01:47.348810] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:36.523 [2024-12-14 05:01:47.348823] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:36.523 [2024-12-14 05:01:47.359591] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:12:36.523 05:01:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.523 05:01:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:36.523 05:01:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:36.523 05:01:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:36.523 05:01:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:36.523 05:01:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:36.523 05:01:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:36.523 05:01:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:36.523 05:01:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:36.523 05:01:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:36.523 05:01:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:36.523 05:01:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:12:36.523 05:01:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:36.523 05:01:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.523 05:01:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:36.783 05:01:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.783 05:01:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:36.783 "name": "raid_bdev1", 00:12:36.783 "uuid": "410dd492-4e6e-4bf0-9abe-b1bd3c9db502", 00:12:36.783 "strip_size_kb": 0, 00:12:36.783 "state": "online", 00:12:36.783 "raid_level": "raid1", 00:12:36.783 "superblock": true, 00:12:36.783 "num_base_bdevs": 4, 00:12:36.783 "num_base_bdevs_discovered": 3, 00:12:36.783 "num_base_bdevs_operational": 3, 00:12:36.783 "base_bdevs_list": [ 00:12:36.783 { 00:12:36.783 "name": null, 00:12:36.783 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:36.783 "is_configured": false, 00:12:36.783 "data_offset": 0, 00:12:36.783 "data_size": 63488 00:12:36.783 }, 00:12:36.783 { 00:12:36.783 "name": "BaseBdev2", 00:12:36.783 "uuid": "aa65fc04-33bc-5c7c-8507-054a5e8490c3", 00:12:36.783 "is_configured": true, 00:12:36.783 "data_offset": 2048, 00:12:36.783 "data_size": 63488 00:12:36.783 }, 00:12:36.783 { 00:12:36.783 "name": "BaseBdev3", 00:12:36.783 "uuid": "dea07963-0c09-5749-a518-ed3323560e82", 00:12:36.783 "is_configured": true, 00:12:36.783 "data_offset": 2048, 00:12:36.783 "data_size": 63488 00:12:36.783 }, 00:12:36.783 { 00:12:36.783 "name": "BaseBdev4", 00:12:36.783 "uuid": "92d10ca8-87dc-5a62-b03b-af35b6d4d927", 00:12:36.783 "is_configured": true, 00:12:36.783 "data_offset": 2048, 00:12:36.783 "data_size": 63488 00:12:36.783 } 00:12:36.783 ] 00:12:36.783 }' 00:12:36.783 05:01:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:36.783 05:01:47 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:37.043 149.50 IOPS, 448.50 MiB/s [2024-12-14T05:01:47.926Z] 05:01:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:37.043 05:01:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:37.043 05:01:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:37.043 05:01:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:37.043 05:01:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:37.043 05:01:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:37.043 05:01:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.043 05:01:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:37.043 05:01:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:37.043 05:01:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.043 05:01:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:37.043 "name": "raid_bdev1", 00:12:37.043 "uuid": "410dd492-4e6e-4bf0-9abe-b1bd3c9db502", 00:12:37.043 "strip_size_kb": 0, 00:12:37.043 "state": "online", 00:12:37.043 "raid_level": "raid1", 00:12:37.043 "superblock": true, 00:12:37.043 "num_base_bdevs": 4, 00:12:37.043 "num_base_bdevs_discovered": 3, 00:12:37.043 "num_base_bdevs_operational": 3, 00:12:37.043 "base_bdevs_list": [ 00:12:37.043 { 00:12:37.043 "name": null, 00:12:37.043 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:37.043 "is_configured": false, 00:12:37.043 "data_offset": 0, 00:12:37.043 "data_size": 63488 00:12:37.043 }, 00:12:37.043 { 
00:12:37.043 "name": "BaseBdev2", 00:12:37.044 "uuid": "aa65fc04-33bc-5c7c-8507-054a5e8490c3", 00:12:37.044 "is_configured": true, 00:12:37.044 "data_offset": 2048, 00:12:37.044 "data_size": 63488 00:12:37.044 }, 00:12:37.044 { 00:12:37.044 "name": "BaseBdev3", 00:12:37.044 "uuid": "dea07963-0c09-5749-a518-ed3323560e82", 00:12:37.044 "is_configured": true, 00:12:37.044 "data_offset": 2048, 00:12:37.044 "data_size": 63488 00:12:37.044 }, 00:12:37.044 { 00:12:37.044 "name": "BaseBdev4", 00:12:37.044 "uuid": "92d10ca8-87dc-5a62-b03b-af35b6d4d927", 00:12:37.044 "is_configured": true, 00:12:37.044 "data_offset": 2048, 00:12:37.044 "data_size": 63488 00:12:37.044 } 00:12:37.044 ] 00:12:37.044 }' 00:12:37.044 05:01:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:37.304 05:01:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:37.304 05:01:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:37.304 05:01:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:37.304 05:01:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:37.304 05:01:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.304 05:01:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:37.304 [2024-12-14 05:01:48.007562] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:37.304 05:01:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.304 05:01:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:12:37.304 [2024-12-14 05:01:48.047864] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:12:37.304 [2024-12-14 05:01:48.049851] 
bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:37.304 [2024-12-14 05:01:48.164398] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:37.304 [2024-12-14 05:01:48.165043] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:37.563 [2024-12-14 05:01:48.392240] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:37.563 [2024-12-14 05:01:48.392843] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:38.083 150.67 IOPS, 452.00 MiB/s [2024-12-14T05:01:48.966Z] [2024-12-14 05:01:48.835222] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:38.344 05:01:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:38.344 05:01:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:38.344 05:01:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:38.344 05:01:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:38.344 05:01:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:38.344 05:01:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:38.344 05:01:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:38.344 05:01:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.344 05:01:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:38.344 
05:01:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.344 05:01:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:38.344 "name": "raid_bdev1", 00:12:38.344 "uuid": "410dd492-4e6e-4bf0-9abe-b1bd3c9db502", 00:12:38.344 "strip_size_kb": 0, 00:12:38.344 "state": "online", 00:12:38.344 "raid_level": "raid1", 00:12:38.344 "superblock": true, 00:12:38.344 "num_base_bdevs": 4, 00:12:38.344 "num_base_bdevs_discovered": 4, 00:12:38.344 "num_base_bdevs_operational": 4, 00:12:38.344 "process": { 00:12:38.344 "type": "rebuild", 00:12:38.344 "target": "spare", 00:12:38.344 "progress": { 00:12:38.344 "blocks": 10240, 00:12:38.344 "percent": 16 00:12:38.344 } 00:12:38.344 }, 00:12:38.344 "base_bdevs_list": [ 00:12:38.344 { 00:12:38.344 "name": "spare", 00:12:38.344 "uuid": "5ae75a82-f9cb-5fb9-b223-7fa7d403f67b", 00:12:38.344 "is_configured": true, 00:12:38.344 "data_offset": 2048, 00:12:38.344 "data_size": 63488 00:12:38.344 }, 00:12:38.344 { 00:12:38.344 "name": "BaseBdev2", 00:12:38.344 "uuid": "aa65fc04-33bc-5c7c-8507-054a5e8490c3", 00:12:38.344 "is_configured": true, 00:12:38.344 "data_offset": 2048, 00:12:38.344 "data_size": 63488 00:12:38.344 }, 00:12:38.344 { 00:12:38.344 "name": "BaseBdev3", 00:12:38.344 "uuid": "dea07963-0c09-5749-a518-ed3323560e82", 00:12:38.344 "is_configured": true, 00:12:38.344 "data_offset": 2048, 00:12:38.344 "data_size": 63488 00:12:38.344 }, 00:12:38.344 { 00:12:38.344 "name": "BaseBdev4", 00:12:38.344 "uuid": "92d10ca8-87dc-5a62-b03b-af35b6d4d927", 00:12:38.344 "is_configured": true, 00:12:38.344 "data_offset": 2048, 00:12:38.344 "data_size": 63488 00:12:38.344 } 00:12:38.344 ] 00:12:38.344 }' 00:12:38.344 05:01:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:38.344 05:01:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:38.344 05:01:49 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:38.344 [2024-12-14 05:01:49.166771] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:12:38.344 [2024-12-14 05:01:49.167255] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:12:38.344 05:01:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:38.344 05:01:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:12:38.344 05:01:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:12:38.344 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:12:38.344 05:01:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:12:38.344 05:01:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:12:38.344 05:01:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:12:38.344 05:01:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:38.344 05:01:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.344 05:01:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:38.344 [2024-12-14 05:01:49.199625] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:38.604 [2024-12-14 05:01:49.371222] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:12:38.864 [2024-12-14 05:01:49.578500] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006080 00:12:38.864 [2024-12-14 05:01:49.578599] 
bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:12:38.864 05:01:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.864 05:01:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:12:38.864 05:01:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:12:38.864 05:01:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:38.864 05:01:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:38.864 05:01:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:38.864 05:01:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:38.864 05:01:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:38.864 05:01:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:38.864 05:01:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:38.864 05:01:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.864 05:01:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:38.864 124.50 IOPS, 373.50 MiB/s [2024-12-14T05:01:49.747Z] 05:01:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.864 05:01:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:38.864 "name": "raid_bdev1", 00:12:38.864 "uuid": "410dd492-4e6e-4bf0-9abe-b1bd3c9db502", 00:12:38.864 "strip_size_kb": 0, 00:12:38.864 "state": "online", 00:12:38.864 "raid_level": "raid1", 00:12:38.864 "superblock": true, 00:12:38.864 "num_base_bdevs": 4, 00:12:38.864 
"num_base_bdevs_discovered": 3, 00:12:38.864 "num_base_bdevs_operational": 3, 00:12:38.864 "process": { 00:12:38.864 "type": "rebuild", 00:12:38.864 "target": "spare", 00:12:38.864 "progress": { 00:12:38.864 "blocks": 16384, 00:12:38.864 "percent": 25 00:12:38.864 } 00:12:38.864 }, 00:12:38.864 "base_bdevs_list": [ 00:12:38.864 { 00:12:38.864 "name": "spare", 00:12:38.864 "uuid": "5ae75a82-f9cb-5fb9-b223-7fa7d403f67b", 00:12:38.864 "is_configured": true, 00:12:38.864 "data_offset": 2048, 00:12:38.864 "data_size": 63488 00:12:38.864 }, 00:12:38.864 { 00:12:38.864 "name": null, 00:12:38.864 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:38.864 "is_configured": false, 00:12:38.864 "data_offset": 0, 00:12:38.864 "data_size": 63488 00:12:38.864 }, 00:12:38.864 { 00:12:38.864 "name": "BaseBdev3", 00:12:38.864 "uuid": "dea07963-0c09-5749-a518-ed3323560e82", 00:12:38.864 "is_configured": true, 00:12:38.864 "data_offset": 2048, 00:12:38.864 "data_size": 63488 00:12:38.864 }, 00:12:38.864 { 00:12:38.864 "name": "BaseBdev4", 00:12:38.864 "uuid": "92d10ca8-87dc-5a62-b03b-af35b6d4d927", 00:12:38.864 "is_configured": true, 00:12:38.864 "data_offset": 2048, 00:12:38.864 "data_size": 63488 00:12:38.864 } 00:12:38.864 ] 00:12:38.864 }' 00:12:38.864 05:01:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:38.864 05:01:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:38.864 05:01:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:38.864 05:01:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:38.864 05:01:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=400 00:12:38.864 05:01:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:38.864 05:01:49 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:38.864 05:01:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:38.864 05:01:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:38.864 05:01:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:38.864 05:01:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:38.864 05:01:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:38.864 05:01:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:38.864 05:01:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.864 05:01:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:39.124 05:01:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.124 05:01:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:39.124 "name": "raid_bdev1", 00:12:39.124 "uuid": "410dd492-4e6e-4bf0-9abe-b1bd3c9db502", 00:12:39.124 "strip_size_kb": 0, 00:12:39.124 "state": "online", 00:12:39.124 "raid_level": "raid1", 00:12:39.124 "superblock": true, 00:12:39.124 "num_base_bdevs": 4, 00:12:39.124 "num_base_bdevs_discovered": 3, 00:12:39.124 "num_base_bdevs_operational": 3, 00:12:39.124 "process": { 00:12:39.124 "type": "rebuild", 00:12:39.124 "target": "spare", 00:12:39.124 "progress": { 00:12:39.124 "blocks": 18432, 00:12:39.124 "percent": 29 00:12:39.124 } 00:12:39.124 }, 00:12:39.124 "base_bdevs_list": [ 00:12:39.124 { 00:12:39.124 "name": "spare", 00:12:39.124 "uuid": "5ae75a82-f9cb-5fb9-b223-7fa7d403f67b", 00:12:39.124 "is_configured": true, 00:12:39.124 "data_offset": 2048, 00:12:39.124 "data_size": 63488 
00:12:39.124 }, 00:12:39.124 { 00:12:39.124 "name": null, 00:12:39.124 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:39.124 "is_configured": false, 00:12:39.124 "data_offset": 0, 00:12:39.124 "data_size": 63488 00:12:39.124 }, 00:12:39.124 { 00:12:39.124 "name": "BaseBdev3", 00:12:39.124 "uuid": "dea07963-0c09-5749-a518-ed3323560e82", 00:12:39.124 "is_configured": true, 00:12:39.124 "data_offset": 2048, 00:12:39.124 "data_size": 63488 00:12:39.124 }, 00:12:39.124 { 00:12:39.124 "name": "BaseBdev4", 00:12:39.124 "uuid": "92d10ca8-87dc-5a62-b03b-af35b6d4d927", 00:12:39.124 "is_configured": true, 00:12:39.124 "data_offset": 2048, 00:12:39.124 "data_size": 63488 00:12:39.124 } 00:12:39.124 ] 00:12:39.124 }' 00:12:39.124 05:01:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:39.124 [2024-12-14 05:01:49.808538] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:12:39.124 05:01:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:39.124 05:01:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:39.124 05:01:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:39.124 05:01:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:40.064 111.60 IOPS, 334.80 MiB/s [2024-12-14T05:01:50.947Z] 05:01:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:40.064 05:01:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:40.064 05:01:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:40.064 05:01:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:40.064 05:01:50 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:40.064 05:01:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:40.064 05:01:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:40.064 05:01:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:40.064 05:01:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.064 05:01:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:40.064 05:01:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.064 05:01:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:40.064 "name": "raid_bdev1", 00:12:40.064 "uuid": "410dd492-4e6e-4bf0-9abe-b1bd3c9db502", 00:12:40.064 "strip_size_kb": 0, 00:12:40.064 "state": "online", 00:12:40.064 "raid_level": "raid1", 00:12:40.064 "superblock": true, 00:12:40.064 "num_base_bdevs": 4, 00:12:40.064 "num_base_bdevs_discovered": 3, 00:12:40.064 "num_base_bdevs_operational": 3, 00:12:40.064 "process": { 00:12:40.064 "type": "rebuild", 00:12:40.064 "target": "spare", 00:12:40.064 "progress": { 00:12:40.064 "blocks": 40960, 00:12:40.064 "percent": 64 00:12:40.064 } 00:12:40.064 }, 00:12:40.064 "base_bdevs_list": [ 00:12:40.064 { 00:12:40.064 "name": "spare", 00:12:40.064 "uuid": "5ae75a82-f9cb-5fb9-b223-7fa7d403f67b", 00:12:40.064 "is_configured": true, 00:12:40.064 "data_offset": 2048, 00:12:40.064 "data_size": 63488 00:12:40.064 }, 00:12:40.064 { 00:12:40.064 "name": null, 00:12:40.064 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:40.064 "is_configured": false, 00:12:40.064 "data_offset": 0, 00:12:40.064 "data_size": 63488 00:12:40.064 }, 00:12:40.064 { 00:12:40.064 "name": "BaseBdev3", 00:12:40.064 "uuid": 
"dea07963-0c09-5749-a518-ed3323560e82", 00:12:40.064 "is_configured": true, 00:12:40.064 "data_offset": 2048, 00:12:40.064 "data_size": 63488 00:12:40.064 }, 00:12:40.064 { 00:12:40.064 "name": "BaseBdev4", 00:12:40.064 "uuid": "92d10ca8-87dc-5a62-b03b-af35b6d4d927", 00:12:40.064 "is_configured": true, 00:12:40.064 "data_offset": 2048, 00:12:40.064 "data_size": 63488 00:12:40.064 } 00:12:40.064 ] 00:12:40.064 }' 00:12:40.064 05:01:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:40.324 05:01:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:40.324 05:01:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:40.324 05:01:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:40.324 05:01:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:40.582 [2024-12-14 05:01:51.218827] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:12:41.411 99.33 IOPS, 298.00 MiB/s [2024-12-14T05:01:52.294Z] 05:01:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:41.411 05:01:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:41.411 05:01:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:41.411 05:01:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:41.411 05:01:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:41.411 05:01:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:41.411 05:01:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs 
all 00:12:41.411 05:01:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:41.411 05:01:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.411 05:01:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:41.411 05:01:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.411 05:01:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:41.411 "name": "raid_bdev1", 00:12:41.411 "uuid": "410dd492-4e6e-4bf0-9abe-b1bd3c9db502", 00:12:41.411 "strip_size_kb": 0, 00:12:41.411 "state": "online", 00:12:41.411 "raid_level": "raid1", 00:12:41.411 "superblock": true, 00:12:41.411 "num_base_bdevs": 4, 00:12:41.411 "num_base_bdevs_discovered": 3, 00:12:41.411 "num_base_bdevs_operational": 3, 00:12:41.411 "process": { 00:12:41.411 "type": "rebuild", 00:12:41.411 "target": "spare", 00:12:41.411 "progress": { 00:12:41.411 "blocks": 59392, 00:12:41.411 "percent": 93 00:12:41.411 } 00:12:41.411 }, 00:12:41.411 "base_bdevs_list": [ 00:12:41.411 { 00:12:41.411 "name": "spare", 00:12:41.411 "uuid": "5ae75a82-f9cb-5fb9-b223-7fa7d403f67b", 00:12:41.411 "is_configured": true, 00:12:41.411 "data_offset": 2048, 00:12:41.411 "data_size": 63488 00:12:41.411 }, 00:12:41.411 { 00:12:41.411 "name": null, 00:12:41.411 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:41.411 "is_configured": false, 00:12:41.411 "data_offset": 0, 00:12:41.411 "data_size": 63488 00:12:41.411 }, 00:12:41.411 { 00:12:41.411 "name": "BaseBdev3", 00:12:41.411 "uuid": "dea07963-0c09-5749-a518-ed3323560e82", 00:12:41.411 "is_configured": true, 00:12:41.411 "data_offset": 2048, 00:12:41.411 "data_size": 63488 00:12:41.411 }, 00:12:41.411 { 00:12:41.411 "name": "BaseBdev4", 00:12:41.411 "uuid": "92d10ca8-87dc-5a62-b03b-af35b6d4d927", 00:12:41.411 "is_configured": true, 00:12:41.411 "data_offset": 2048, 
00:12:41.411 "data_size": 63488 00:12:41.411 } 00:12:41.411 ] 00:12:41.411 }' 00:12:41.411 05:01:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:41.411 05:01:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:41.411 05:01:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:41.411 [2024-12-14 05:01:52.187803] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:12:41.411 05:01:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:41.411 05:01:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:41.671 [2024-12-14 05:01:52.292982] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:12:41.671 [2024-12-14 05:01:52.296187] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:42.501 90.43 IOPS, 271.29 MiB/s [2024-12-14T05:01:53.384Z] 05:01:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:42.501 05:01:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:42.501 05:01:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:42.501 05:01:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:42.501 05:01:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:42.501 05:01:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:42.501 05:01:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:42.501 05:01:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:12:42.501 05:01:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.501 05:01:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:42.501 05:01:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.501 05:01:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:42.501 "name": "raid_bdev1", 00:12:42.501 "uuid": "410dd492-4e6e-4bf0-9abe-b1bd3c9db502", 00:12:42.501 "strip_size_kb": 0, 00:12:42.501 "state": "online", 00:12:42.501 "raid_level": "raid1", 00:12:42.501 "superblock": true, 00:12:42.501 "num_base_bdevs": 4, 00:12:42.501 "num_base_bdevs_discovered": 3, 00:12:42.501 "num_base_bdevs_operational": 3, 00:12:42.501 "base_bdevs_list": [ 00:12:42.501 { 00:12:42.501 "name": "spare", 00:12:42.501 "uuid": "5ae75a82-f9cb-5fb9-b223-7fa7d403f67b", 00:12:42.501 "is_configured": true, 00:12:42.501 "data_offset": 2048, 00:12:42.501 "data_size": 63488 00:12:42.501 }, 00:12:42.501 { 00:12:42.501 "name": null, 00:12:42.501 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:42.501 "is_configured": false, 00:12:42.501 "data_offset": 0, 00:12:42.501 "data_size": 63488 00:12:42.501 }, 00:12:42.501 { 00:12:42.501 "name": "BaseBdev3", 00:12:42.501 "uuid": "dea07963-0c09-5749-a518-ed3323560e82", 00:12:42.501 "is_configured": true, 00:12:42.501 "data_offset": 2048, 00:12:42.501 "data_size": 63488 00:12:42.501 }, 00:12:42.501 { 00:12:42.501 "name": "BaseBdev4", 00:12:42.501 "uuid": "92d10ca8-87dc-5a62-b03b-af35b6d4d927", 00:12:42.501 "is_configured": true, 00:12:42.501 "data_offset": 2048, 00:12:42.501 "data_size": 63488 00:12:42.501 } 00:12:42.501 ] 00:12:42.501 }' 00:12:42.501 05:01:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:42.501 05:01:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:12:42.501 
05:01:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:42.501 05:01:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:12:42.501 05:01:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:12:42.501 05:01:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:42.501 05:01:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:42.501 05:01:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:42.501 05:01:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:42.501 05:01:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:42.501 05:01:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:42.501 05:01:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:42.501 05:01:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.501 05:01:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:42.501 05:01:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.501 05:01:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:42.501 "name": "raid_bdev1", 00:12:42.501 "uuid": "410dd492-4e6e-4bf0-9abe-b1bd3c9db502", 00:12:42.501 "strip_size_kb": 0, 00:12:42.501 "state": "online", 00:12:42.501 "raid_level": "raid1", 00:12:42.501 "superblock": true, 00:12:42.501 "num_base_bdevs": 4, 00:12:42.501 "num_base_bdevs_discovered": 3, 00:12:42.501 "num_base_bdevs_operational": 3, 00:12:42.501 "base_bdevs_list": [ 00:12:42.501 { 00:12:42.501 "name": "spare", 00:12:42.501 
"uuid": "5ae75a82-f9cb-5fb9-b223-7fa7d403f67b", 00:12:42.501 "is_configured": true, 00:12:42.501 "data_offset": 2048, 00:12:42.501 "data_size": 63488 00:12:42.501 }, 00:12:42.501 { 00:12:42.501 "name": null, 00:12:42.502 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:42.502 "is_configured": false, 00:12:42.502 "data_offset": 0, 00:12:42.502 "data_size": 63488 00:12:42.502 }, 00:12:42.502 { 00:12:42.502 "name": "BaseBdev3", 00:12:42.502 "uuid": "dea07963-0c09-5749-a518-ed3323560e82", 00:12:42.502 "is_configured": true, 00:12:42.502 "data_offset": 2048, 00:12:42.502 "data_size": 63488 00:12:42.502 }, 00:12:42.502 { 00:12:42.502 "name": "BaseBdev4", 00:12:42.502 "uuid": "92d10ca8-87dc-5a62-b03b-af35b6d4d927", 00:12:42.502 "is_configured": true, 00:12:42.502 "data_offset": 2048, 00:12:42.502 "data_size": 63488 00:12:42.502 } 00:12:42.502 ] 00:12:42.502 }' 00:12:42.502 05:01:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:42.762 05:01:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:42.762 05:01:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:42.762 05:01:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:42.762 05:01:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:42.762 05:01:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:42.762 05:01:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:42.762 05:01:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:42.762 05:01:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:42.762 05:01:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=3 00:12:42.762 05:01:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:42.762 05:01:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:42.762 05:01:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:42.762 05:01:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:42.762 05:01:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:42.762 05:01:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.762 05:01:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:42.762 05:01:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:42.762 05:01:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.762 05:01:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:42.762 "name": "raid_bdev1", 00:12:42.762 "uuid": "410dd492-4e6e-4bf0-9abe-b1bd3c9db502", 00:12:42.762 "strip_size_kb": 0, 00:12:42.762 "state": "online", 00:12:42.762 "raid_level": "raid1", 00:12:42.762 "superblock": true, 00:12:42.762 "num_base_bdevs": 4, 00:12:42.762 "num_base_bdevs_discovered": 3, 00:12:42.762 "num_base_bdevs_operational": 3, 00:12:42.762 "base_bdevs_list": [ 00:12:42.762 { 00:12:42.762 "name": "spare", 00:12:42.762 "uuid": "5ae75a82-f9cb-5fb9-b223-7fa7d403f67b", 00:12:42.762 "is_configured": true, 00:12:42.762 "data_offset": 2048, 00:12:42.762 "data_size": 63488 00:12:42.762 }, 00:12:42.762 { 00:12:42.762 "name": null, 00:12:42.762 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:42.762 "is_configured": false, 00:12:42.762 "data_offset": 0, 00:12:42.762 "data_size": 63488 00:12:42.762 }, 00:12:42.762 { 00:12:42.762 "name": 
"BaseBdev3", 00:12:42.762 "uuid": "dea07963-0c09-5749-a518-ed3323560e82", 00:12:42.762 "is_configured": true, 00:12:42.762 "data_offset": 2048, 00:12:42.762 "data_size": 63488 00:12:42.762 }, 00:12:42.762 { 00:12:42.762 "name": "BaseBdev4", 00:12:42.762 "uuid": "92d10ca8-87dc-5a62-b03b-af35b6d4d927", 00:12:42.762 "is_configured": true, 00:12:42.762 "data_offset": 2048, 00:12:42.762 "data_size": 63488 00:12:42.762 } 00:12:42.762 ] 00:12:42.762 }' 00:12:42.762 05:01:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:42.762 05:01:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:43.332 84.88 IOPS, 254.62 MiB/s [2024-12-14T05:01:54.215Z] 05:01:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:43.332 05:01:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.332 05:01:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:43.332 [2024-12-14 05:01:53.991402] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:43.332 [2024-12-14 05:01:53.991477] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:43.332 00:12:43.332 Latency(us) 00:12:43.332 [2024-12-14T05:01:54.215Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:43.332 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:12:43.332 raid_bdev1 : 8.42 82.43 247.29 0.00 0.00 17647.73 277.24 110352.32 00:12:43.332 [2024-12-14T05:01:54.215Z] =================================================================================================================== 00:12:43.332 [2024-12-14T05:01:54.215Z] Total : 82.43 247.29 0.00 0.00 17647.73 277.24 110352.32 00:12:43.332 [2024-12-14 05:01:54.010369] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:12:43.332 [2024-12-14 05:01:54.010448] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:43.332 [2024-12-14 05:01:54.010581] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:43.332 [2024-12-14 05:01:54.010631] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:12:43.332 { 00:12:43.332 "results": [ 00:12:43.332 { 00:12:43.332 "job": "raid_bdev1", 00:12:43.332 "core_mask": "0x1", 00:12:43.332 "workload": "randrw", 00:12:43.332 "percentage": 50, 00:12:43.332 "status": "finished", 00:12:43.332 "queue_depth": 2, 00:12:43.332 "io_size": 3145728, 00:12:43.332 "runtime": 8.419217, 00:12:43.332 "iops": 82.43046829651736, 00:12:43.332 "mibps": 247.2914048895521, 00:12:43.332 "io_failed": 0, 00:12:43.332 "io_timeout": 0, 00:12:43.332 "avg_latency_us": 17647.727571322503, 00:12:43.332 "min_latency_us": 277.2401746724891, 00:12:43.332 "max_latency_us": 110352.32139737991 00:12:43.332 } 00:12:43.332 ], 00:12:43.332 "core_count": 1 00:12:43.332 } 00:12:43.332 05:01:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.332 05:01:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:43.332 05:01:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:12:43.332 05:01:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.332 05:01:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:43.332 05:01:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.332 05:01:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:12:43.332 05:01:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:12:43.332 05:01:54 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:12:43.332 05:01:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:12:43.332 05:01:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:43.332 05:01:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:12:43.333 05:01:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:43.333 05:01:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:43.333 05:01:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:43.333 05:01:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:12:43.333 05:01:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:43.333 05:01:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:43.333 05:01:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:12:43.593 /dev/nbd0 00:12:43.593 05:01:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:43.593 05:01:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:43.593 05:01:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:12:43.593 05:01:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:12:43.593 05:01:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:43.593 05:01:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:43.593 05:01:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w 
nbd0 /proc/partitions 00:12:43.593 05:01:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:12:43.593 05:01:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:43.593 05:01:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:43.593 05:01:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:43.593 1+0 records in 00:12:43.593 1+0 records out 00:12:43.593 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000454257 s, 9.0 MB/s 00:12:43.594 05:01:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:43.594 05:01:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 00:12:43.594 05:01:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:43.594 05:01:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:43.594 05:01:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:12:43.594 05:01:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:43.594 05:01:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:43.594 05:01:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:12:43.594 05:01:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:12:43.594 05:01:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@728 -- # continue 00:12:43.594 05:01:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:12:43.594 05:01:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z 
BaseBdev3 ']' 00:12:43.594 05:01:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:12:43.594 05:01:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:43.594 05:01:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:12:43.594 05:01:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:43.594 05:01:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:12:43.594 05:01:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:43.594 05:01:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:12:43.594 05:01:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:43.594 05:01:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:43.594 05:01:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:12:43.854 /dev/nbd1 00:12:43.854 05:01:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:43.854 05:01:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:43.854 05:01:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:12:43.854 05:01:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:12:43.854 05:01:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:43.854 05:01:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:43.854 05:01:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:12:43.854 05:01:54 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:12:43.854 05:01:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:43.854 05:01:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:43.854 05:01:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:43.854 1+0 records in 00:12:43.854 1+0 records out 00:12:43.854 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000347389 s, 11.8 MB/s 00:12:43.854 05:01:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:43.854 05:01:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 00:12:43.854 05:01:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:43.854 05:01:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:43.854 05:01:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:12:43.854 05:01:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:43.854 05:01:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:43.854 05:01:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:12:43.854 05:01:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:12:43.854 05:01:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:43.854 05:01:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:12:43.855 05:01:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # 
local nbd_list 00:12:43.855 05:01:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:12:43.855 05:01:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:43.855 05:01:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:12:44.115 05:01:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:44.115 05:01:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:44.115 05:01:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:44.115 05:01:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:44.115 05:01:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:44.115 05:01:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:44.115 05:01:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:12:44.115 05:01:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:12:44.115 05:01:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:12:44.115 05:01:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:12:44.115 05:01:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:12:44.115 05:01:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:44.115 05:01:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:12:44.115 05:01:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:44.115 05:01:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 
-- # nbd_list=('/dev/nbd1') 00:12:44.115 05:01:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:44.115 05:01:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:12:44.115 05:01:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:44.115 05:01:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:44.115 05:01:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:12:44.375 /dev/nbd1 00:12:44.375 05:01:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:44.375 05:01:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:44.375 05:01:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:12:44.375 05:01:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:12:44.375 05:01:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:44.375 05:01:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:44.375 05:01:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:12:44.375 05:01:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:12:44.375 05:01:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:44.375 05:01:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:44.375 05:01:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:44.375 1+0 records in 00:12:44.375 1+0 records out 00:12:44.375 4096 bytes (4.1 kB, 4.0 KiB) 
copied, 0.000521862 s, 7.8 MB/s 00:12:44.375 05:01:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:44.375 05:01:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 00:12:44.375 05:01:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:44.375 05:01:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:44.375 05:01:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:12:44.375 05:01:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:44.375 05:01:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:44.375 05:01:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:12:44.375 05:01:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:12:44.375 05:01:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:44.375 05:01:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:12:44.375 05:01:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:44.375 05:01:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:12:44.375 05:01:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:44.375 05:01:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:12:44.635 05:01:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:44.635 05:01:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- 
# waitfornbd_exit nbd1 00:12:44.635 05:01:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:44.635 05:01:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:44.635 05:01:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:44.635 05:01:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:44.635 05:01:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:12:44.635 05:01:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:12:44.635 05:01:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:12:44.635 05:01:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:44.635 05:01:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:44.635 05:01:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:44.635 05:01:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:12:44.635 05:01:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:44.635 05:01:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:44.895 05:01:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:44.895 05:01:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:44.895 05:01:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:44.895 05:01:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:44.895 05:01:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 
00:12:44.895 05:01:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:44.895 05:01:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:12:44.895 05:01:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:12:44.895 05:01:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:12:44.895 05:01:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:12:44.895 05:01:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.895 05:01:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:44.895 05:01:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.895 05:01:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:44.895 05:01:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.895 05:01:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:44.895 [2024-12-14 05:01:55.619724] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:44.895 [2024-12-14 05:01:55.619832] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:44.895 [2024-12-14 05:01:55.619873] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:12:44.895 [2024-12-14 05:01:55.619902] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:44.895 [2024-12-14 05:01:55.622066] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:44.895 [2024-12-14 05:01:55.622136] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:44.895 [2024-12-14 05:01:55.622247] 
bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:12:44.895 [2024-12-14 05:01:55.622293] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:44.895 [2024-12-14 05:01:55.622409] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:44.895 [2024-12-14 05:01:55.622507] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:44.895 spare 00:12:44.895 05:01:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.895 05:01:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:12:44.895 05:01:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.895 05:01:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:44.895 [2024-12-14 05:01:55.722396] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006600 00:12:44.895 [2024-12-14 05:01:55.722457] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:44.895 [2024-12-14 05:01:55.722736] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000036fc0 00:12:44.895 [2024-12-14 05:01:55.722900] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006600 00:12:44.895 [2024-12-14 05:01:55.722952] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006600 00:12:44.895 [2024-12-14 05:01:55.723117] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:44.895 05:01:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.895 05:01:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:44.895 05:01:55 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:44.895 05:01:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:44.895 05:01:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:44.895 05:01:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:44.895 05:01:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:44.895 05:01:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:44.895 05:01:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:44.895 05:01:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:44.895 05:01:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:44.895 05:01:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:44.895 05:01:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.895 05:01:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:44.895 05:01:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:44.895 05:01:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.155 05:01:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:45.155 "name": "raid_bdev1", 00:12:45.155 "uuid": "410dd492-4e6e-4bf0-9abe-b1bd3c9db502", 00:12:45.155 "strip_size_kb": 0, 00:12:45.155 "state": "online", 00:12:45.155 "raid_level": "raid1", 00:12:45.155 "superblock": true, 00:12:45.155 "num_base_bdevs": 4, 00:12:45.155 "num_base_bdevs_discovered": 3, 00:12:45.155 "num_base_bdevs_operational": 3, 00:12:45.155 "base_bdevs_list": [ 
00:12:45.155 { 00:12:45.155 "name": "spare", 00:12:45.155 "uuid": "5ae75a82-f9cb-5fb9-b223-7fa7d403f67b", 00:12:45.155 "is_configured": true, 00:12:45.155 "data_offset": 2048, 00:12:45.155 "data_size": 63488 00:12:45.155 }, 00:12:45.155 { 00:12:45.155 "name": null, 00:12:45.155 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:45.155 "is_configured": false, 00:12:45.155 "data_offset": 2048, 00:12:45.155 "data_size": 63488 00:12:45.155 }, 00:12:45.155 { 00:12:45.155 "name": "BaseBdev3", 00:12:45.155 "uuid": "dea07963-0c09-5749-a518-ed3323560e82", 00:12:45.155 "is_configured": true, 00:12:45.155 "data_offset": 2048, 00:12:45.155 "data_size": 63488 00:12:45.155 }, 00:12:45.155 { 00:12:45.155 "name": "BaseBdev4", 00:12:45.155 "uuid": "92d10ca8-87dc-5a62-b03b-af35b6d4d927", 00:12:45.155 "is_configured": true, 00:12:45.155 "data_offset": 2048, 00:12:45.155 "data_size": 63488 00:12:45.155 } 00:12:45.155 ] 00:12:45.155 }' 00:12:45.155 05:01:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:45.155 05:01:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:45.415 05:01:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:45.415 05:01:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:45.415 05:01:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:45.415 05:01:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:45.415 05:01:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:45.415 05:01:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:45.415 05:01:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:45.415 05:01:56 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.415 05:01:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:45.415 05:01:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.415 05:01:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:45.415 "name": "raid_bdev1", 00:12:45.415 "uuid": "410dd492-4e6e-4bf0-9abe-b1bd3c9db502", 00:12:45.415 "strip_size_kb": 0, 00:12:45.415 "state": "online", 00:12:45.415 "raid_level": "raid1", 00:12:45.415 "superblock": true, 00:12:45.415 "num_base_bdevs": 4, 00:12:45.415 "num_base_bdevs_discovered": 3, 00:12:45.415 "num_base_bdevs_operational": 3, 00:12:45.415 "base_bdevs_list": [ 00:12:45.415 { 00:12:45.415 "name": "spare", 00:12:45.415 "uuid": "5ae75a82-f9cb-5fb9-b223-7fa7d403f67b", 00:12:45.415 "is_configured": true, 00:12:45.415 "data_offset": 2048, 00:12:45.415 "data_size": 63488 00:12:45.415 }, 00:12:45.415 { 00:12:45.415 "name": null, 00:12:45.415 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:45.415 "is_configured": false, 00:12:45.415 "data_offset": 2048, 00:12:45.415 "data_size": 63488 00:12:45.415 }, 00:12:45.415 { 00:12:45.415 "name": "BaseBdev3", 00:12:45.415 "uuid": "dea07963-0c09-5749-a518-ed3323560e82", 00:12:45.415 "is_configured": true, 00:12:45.415 "data_offset": 2048, 00:12:45.415 "data_size": 63488 00:12:45.415 }, 00:12:45.415 { 00:12:45.415 "name": "BaseBdev4", 00:12:45.415 "uuid": "92d10ca8-87dc-5a62-b03b-af35b6d4d927", 00:12:45.415 "is_configured": true, 00:12:45.415 "data_offset": 2048, 00:12:45.415 "data_size": 63488 00:12:45.415 } 00:12:45.415 ] 00:12:45.415 }' 00:12:45.415 05:01:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:45.415 05:01:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:45.415 05:01:56 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:45.415 05:01:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:45.415 05:01:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:45.415 05:01:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:12:45.415 05:01:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.415 05:01:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:45.675 05:01:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.675 05:01:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:12:45.675 05:01:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:45.675 05:01:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.675 05:01:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:45.675 [2024-12-14 05:01:56.334758] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:45.675 05:01:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.675 05:01:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:45.675 05:01:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:45.675 05:01:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:45.675 05:01:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:45.675 05:01:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:12:45.675 05:01:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:45.675 05:01:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:45.675 05:01:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:45.675 05:01:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:45.675 05:01:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:45.675 05:01:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:45.675 05:01:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:45.675 05:01:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.675 05:01:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:45.675 05:01:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.675 05:01:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:45.675 "name": "raid_bdev1", 00:12:45.675 "uuid": "410dd492-4e6e-4bf0-9abe-b1bd3c9db502", 00:12:45.675 "strip_size_kb": 0, 00:12:45.675 "state": "online", 00:12:45.675 "raid_level": "raid1", 00:12:45.675 "superblock": true, 00:12:45.675 "num_base_bdevs": 4, 00:12:45.675 "num_base_bdevs_discovered": 2, 00:12:45.675 "num_base_bdevs_operational": 2, 00:12:45.675 "base_bdevs_list": [ 00:12:45.675 { 00:12:45.675 "name": null, 00:12:45.675 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:45.675 "is_configured": false, 00:12:45.676 "data_offset": 0, 00:12:45.676 "data_size": 63488 00:12:45.676 }, 00:12:45.676 { 00:12:45.676 "name": null, 00:12:45.676 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:45.676 "is_configured": false, 00:12:45.676 
"data_offset": 2048, 00:12:45.676 "data_size": 63488 00:12:45.676 }, 00:12:45.676 { 00:12:45.676 "name": "BaseBdev3", 00:12:45.676 "uuid": "dea07963-0c09-5749-a518-ed3323560e82", 00:12:45.676 "is_configured": true, 00:12:45.676 "data_offset": 2048, 00:12:45.676 "data_size": 63488 00:12:45.676 }, 00:12:45.676 { 00:12:45.676 "name": "BaseBdev4", 00:12:45.676 "uuid": "92d10ca8-87dc-5a62-b03b-af35b6d4d927", 00:12:45.676 "is_configured": true, 00:12:45.676 "data_offset": 2048, 00:12:45.676 "data_size": 63488 00:12:45.676 } 00:12:45.676 ] 00:12:45.676 }' 00:12:45.676 05:01:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:45.676 05:01:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:45.935 05:01:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:45.935 05:01:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.935 05:01:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:45.935 [2024-12-14 05:01:56.734126] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:45.935 [2024-12-14 05:01:56.734368] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:12:45.936 [2024-12-14 05:01:56.734428] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:12:45.936 [2024-12-14 05:01:56.734551] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:45.936 [2024-12-14 05:01:56.738214] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037090 00:12:45.936 05:01:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.936 05:01:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:12:45.936 [2024-12-14 05:01:56.740074] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:46.875 05:01:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:46.875 05:01:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:46.875 05:01:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:46.875 05:01:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:46.875 05:01:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:46.875 05:01:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:46.875 05:01:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:46.875 05:01:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.875 05:01:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:47.135 05:01:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.135 05:01:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:47.135 "name": "raid_bdev1", 00:12:47.135 "uuid": "410dd492-4e6e-4bf0-9abe-b1bd3c9db502", 00:12:47.135 "strip_size_kb": 0, 00:12:47.135 "state": "online", 
00:12:47.135 "raid_level": "raid1", 00:12:47.135 "superblock": true, 00:12:47.135 "num_base_bdevs": 4, 00:12:47.135 "num_base_bdevs_discovered": 3, 00:12:47.135 "num_base_bdevs_operational": 3, 00:12:47.135 "process": { 00:12:47.135 "type": "rebuild", 00:12:47.135 "target": "spare", 00:12:47.135 "progress": { 00:12:47.135 "blocks": 20480, 00:12:47.135 "percent": 32 00:12:47.135 } 00:12:47.135 }, 00:12:47.135 "base_bdevs_list": [ 00:12:47.135 { 00:12:47.135 "name": "spare", 00:12:47.135 "uuid": "5ae75a82-f9cb-5fb9-b223-7fa7d403f67b", 00:12:47.135 "is_configured": true, 00:12:47.135 "data_offset": 2048, 00:12:47.135 "data_size": 63488 00:12:47.135 }, 00:12:47.135 { 00:12:47.135 "name": null, 00:12:47.135 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:47.135 "is_configured": false, 00:12:47.135 "data_offset": 2048, 00:12:47.135 "data_size": 63488 00:12:47.135 }, 00:12:47.135 { 00:12:47.135 "name": "BaseBdev3", 00:12:47.135 "uuid": "dea07963-0c09-5749-a518-ed3323560e82", 00:12:47.135 "is_configured": true, 00:12:47.135 "data_offset": 2048, 00:12:47.135 "data_size": 63488 00:12:47.135 }, 00:12:47.135 { 00:12:47.135 "name": "BaseBdev4", 00:12:47.135 "uuid": "92d10ca8-87dc-5a62-b03b-af35b6d4d927", 00:12:47.135 "is_configured": true, 00:12:47.135 "data_offset": 2048, 00:12:47.135 "data_size": 63488 00:12:47.135 } 00:12:47.135 ] 00:12:47.135 }' 00:12:47.135 05:01:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:47.136 05:01:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:47.136 05:01:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:47.136 05:01:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:47.136 05:01:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:12:47.136 05:01:57 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.136 05:01:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:47.136 [2024-12-14 05:01:57.900993] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:47.136 [2024-12-14 05:01:57.944019] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:47.136 [2024-12-14 05:01:57.944120] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:47.136 [2024-12-14 05:01:57.944164] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:47.136 [2024-12-14 05:01:57.944184] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:47.136 05:01:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.136 05:01:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:47.136 05:01:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:47.136 05:01:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:47.136 05:01:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:47.136 05:01:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:47.136 05:01:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:47.136 05:01:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:47.136 05:01:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:47.136 05:01:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:47.136 05:01:57 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:47.136 05:01:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:47.136 05:01:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:47.136 05:01:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.136 05:01:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:47.136 05:01:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.136 05:01:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:47.136 "name": "raid_bdev1", 00:12:47.136 "uuid": "410dd492-4e6e-4bf0-9abe-b1bd3c9db502", 00:12:47.136 "strip_size_kb": 0, 00:12:47.136 "state": "online", 00:12:47.136 "raid_level": "raid1", 00:12:47.136 "superblock": true, 00:12:47.136 "num_base_bdevs": 4, 00:12:47.136 "num_base_bdevs_discovered": 2, 00:12:47.136 "num_base_bdevs_operational": 2, 00:12:47.136 "base_bdevs_list": [ 00:12:47.136 { 00:12:47.136 "name": null, 00:12:47.136 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:47.136 "is_configured": false, 00:12:47.136 "data_offset": 0, 00:12:47.136 "data_size": 63488 00:12:47.136 }, 00:12:47.136 { 00:12:47.136 "name": null, 00:12:47.136 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:47.136 "is_configured": false, 00:12:47.136 "data_offset": 2048, 00:12:47.136 "data_size": 63488 00:12:47.136 }, 00:12:47.136 { 00:12:47.136 "name": "BaseBdev3", 00:12:47.136 "uuid": "dea07963-0c09-5749-a518-ed3323560e82", 00:12:47.136 "is_configured": true, 00:12:47.136 "data_offset": 2048, 00:12:47.136 "data_size": 63488 00:12:47.136 }, 00:12:47.136 { 00:12:47.136 "name": "BaseBdev4", 00:12:47.136 "uuid": "92d10ca8-87dc-5a62-b03b-af35b6d4d927", 00:12:47.136 "is_configured": true, 00:12:47.136 "data_offset": 2048, 00:12:47.136 
"data_size": 63488 00:12:47.136 } 00:12:47.136 ] 00:12:47.136 }' 00:12:47.136 05:01:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:47.136 05:01:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:47.705 05:01:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:47.705 05:01:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.705 05:01:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:47.705 [2024-12-14 05:01:58.431266] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:47.706 [2024-12-14 05:01:58.431374] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:47.706 [2024-12-14 05:01:58.431415] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:12:47.706 [2024-12-14 05:01:58.431443] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:47.706 [2024-12-14 05:01:58.431881] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:47.706 [2024-12-14 05:01:58.431940] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:47.706 [2024-12-14 05:01:58.432055] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:12:47.706 [2024-12-14 05:01:58.432094] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:12:47.706 [2024-12-14 05:01:58.432141] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:12:47.706 [2024-12-14 05:01:58.432221] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:47.706 [2024-12-14 05:01:58.435617] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037160 00:12:47.706 spare 00:12:47.706 05:01:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.706 05:01:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:12:47.706 [2024-12-14 05:01:58.437482] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:48.644 05:01:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:48.644 05:01:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:48.644 05:01:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:48.644 05:01:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:48.644 05:01:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:48.644 05:01:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:48.644 05:01:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:48.644 05:01:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.644 05:01:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:48.644 05:01:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.644 05:01:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:48.644 "name": "raid_bdev1", 00:12:48.644 "uuid": "410dd492-4e6e-4bf0-9abe-b1bd3c9db502", 00:12:48.644 "strip_size_kb": 0, 00:12:48.644 
"state": "online", 00:12:48.644 "raid_level": "raid1", 00:12:48.644 "superblock": true, 00:12:48.644 "num_base_bdevs": 4, 00:12:48.644 "num_base_bdevs_discovered": 3, 00:12:48.644 "num_base_bdevs_operational": 3, 00:12:48.644 "process": { 00:12:48.644 "type": "rebuild", 00:12:48.644 "target": "spare", 00:12:48.644 "progress": { 00:12:48.644 "blocks": 20480, 00:12:48.644 "percent": 32 00:12:48.644 } 00:12:48.644 }, 00:12:48.644 "base_bdevs_list": [ 00:12:48.644 { 00:12:48.644 "name": "spare", 00:12:48.644 "uuid": "5ae75a82-f9cb-5fb9-b223-7fa7d403f67b", 00:12:48.644 "is_configured": true, 00:12:48.644 "data_offset": 2048, 00:12:48.644 "data_size": 63488 00:12:48.644 }, 00:12:48.644 { 00:12:48.644 "name": null, 00:12:48.644 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:48.644 "is_configured": false, 00:12:48.644 "data_offset": 2048, 00:12:48.644 "data_size": 63488 00:12:48.644 }, 00:12:48.644 { 00:12:48.644 "name": "BaseBdev3", 00:12:48.644 "uuid": "dea07963-0c09-5749-a518-ed3323560e82", 00:12:48.644 "is_configured": true, 00:12:48.644 "data_offset": 2048, 00:12:48.644 "data_size": 63488 00:12:48.644 }, 00:12:48.644 { 00:12:48.644 "name": "BaseBdev4", 00:12:48.644 "uuid": "92d10ca8-87dc-5a62-b03b-af35b6d4d927", 00:12:48.644 "is_configured": true, 00:12:48.644 "data_offset": 2048, 00:12:48.644 "data_size": 63488 00:12:48.644 } 00:12:48.644 ] 00:12:48.644 }' 00:12:48.644 05:01:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:48.904 05:01:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:48.904 05:01:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:48.904 05:01:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:48.904 05:01:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:12:48.904 05:01:59 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.904 05:01:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:48.904 [2024-12-14 05:01:59.599005] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:48.904 [2024-12-14 05:01:59.641484] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:48.904 [2024-12-14 05:01:59.641587] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:48.904 [2024-12-14 05:01:59.641621] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:48.904 [2024-12-14 05:01:59.641642] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:48.904 05:01:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.904 05:01:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:48.904 05:01:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:48.904 05:01:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:48.904 05:01:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:48.904 05:01:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:48.904 05:01:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:48.904 05:01:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:48.904 05:01:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:48.904 05:01:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:48.904 05:01:59 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:48.904 05:01:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:48.904 05:01:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:48.904 05:01:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.904 05:01:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:48.904 05:01:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.904 05:01:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:48.904 "name": "raid_bdev1", 00:12:48.904 "uuid": "410dd492-4e6e-4bf0-9abe-b1bd3c9db502", 00:12:48.904 "strip_size_kb": 0, 00:12:48.904 "state": "online", 00:12:48.904 "raid_level": "raid1", 00:12:48.904 "superblock": true, 00:12:48.904 "num_base_bdevs": 4, 00:12:48.904 "num_base_bdevs_discovered": 2, 00:12:48.904 "num_base_bdevs_operational": 2, 00:12:48.904 "base_bdevs_list": [ 00:12:48.904 { 00:12:48.904 "name": null, 00:12:48.904 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:48.904 "is_configured": false, 00:12:48.905 "data_offset": 0, 00:12:48.905 "data_size": 63488 00:12:48.905 }, 00:12:48.905 { 00:12:48.905 "name": null, 00:12:48.905 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:48.905 "is_configured": false, 00:12:48.905 "data_offset": 2048, 00:12:48.905 "data_size": 63488 00:12:48.905 }, 00:12:48.905 { 00:12:48.905 "name": "BaseBdev3", 00:12:48.905 "uuid": "dea07963-0c09-5749-a518-ed3323560e82", 00:12:48.905 "is_configured": true, 00:12:48.905 "data_offset": 2048, 00:12:48.905 "data_size": 63488 00:12:48.905 }, 00:12:48.905 { 00:12:48.905 "name": "BaseBdev4", 00:12:48.905 "uuid": "92d10ca8-87dc-5a62-b03b-af35b6d4d927", 00:12:48.905 "is_configured": true, 00:12:48.905 "data_offset": 2048, 00:12:48.905 
"data_size": 63488 00:12:48.905 } 00:12:48.905 ] 00:12:48.905 }' 00:12:48.905 05:01:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:48.905 05:01:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:49.472 05:02:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:49.472 05:02:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:49.472 05:02:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:49.472 05:02:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:49.472 05:02:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:49.472 05:02:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:49.473 05:02:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:49.473 05:02:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.473 05:02:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:49.473 05:02:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.473 05:02:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:49.473 "name": "raid_bdev1", 00:12:49.473 "uuid": "410dd492-4e6e-4bf0-9abe-b1bd3c9db502", 00:12:49.473 "strip_size_kb": 0, 00:12:49.473 "state": "online", 00:12:49.473 "raid_level": "raid1", 00:12:49.473 "superblock": true, 00:12:49.473 "num_base_bdevs": 4, 00:12:49.473 "num_base_bdevs_discovered": 2, 00:12:49.473 "num_base_bdevs_operational": 2, 00:12:49.473 "base_bdevs_list": [ 00:12:49.473 { 00:12:49.473 "name": null, 00:12:49.473 "uuid": "00000000-0000-0000-0000-000000000000", 
00:12:49.473 "is_configured": false, 00:12:49.473 "data_offset": 0, 00:12:49.473 "data_size": 63488 00:12:49.473 }, 00:12:49.473 { 00:12:49.473 "name": null, 00:12:49.473 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:49.473 "is_configured": false, 00:12:49.473 "data_offset": 2048, 00:12:49.473 "data_size": 63488 00:12:49.473 }, 00:12:49.473 { 00:12:49.473 "name": "BaseBdev3", 00:12:49.473 "uuid": "dea07963-0c09-5749-a518-ed3323560e82", 00:12:49.473 "is_configured": true, 00:12:49.473 "data_offset": 2048, 00:12:49.473 "data_size": 63488 00:12:49.473 }, 00:12:49.473 { 00:12:49.473 "name": "BaseBdev4", 00:12:49.473 "uuid": "92d10ca8-87dc-5a62-b03b-af35b6d4d927", 00:12:49.473 "is_configured": true, 00:12:49.473 "data_offset": 2048, 00:12:49.473 "data_size": 63488 00:12:49.473 } 00:12:49.473 ] 00:12:49.473 }' 00:12:49.473 05:02:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:49.473 05:02:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:49.473 05:02:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:49.473 05:02:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:49.473 05:02:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:12:49.473 05:02:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.473 05:02:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:49.473 05:02:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.473 05:02:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:49.473 05:02:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.473 05:02:00 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:49.473 [2024-12-14 05:02:00.280573] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:49.473 [2024-12-14 05:02:00.280694] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:49.473 [2024-12-14 05:02:00.280729] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000cc80 00:12:49.473 [2024-12-14 05:02:00.280760] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:49.473 [2024-12-14 05:02:00.281188] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:49.473 [2024-12-14 05:02:00.281248] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:49.473 [2024-12-14 05:02:00.281324] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:12:49.473 [2024-12-14 05:02:00.281340] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:12:49.473 [2024-12-14 05:02:00.281347] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:12:49.473 [2024-12-14 05:02:00.281358] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:12:49.473 BaseBdev1 00:12:49.473 05:02:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.473 05:02:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:12:50.853 05:02:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:50.853 05:02:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:50.853 05:02:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:12:50.853 05:02:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:50.853 05:02:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:50.853 05:02:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:50.853 05:02:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:50.853 05:02:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:50.853 05:02:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:50.853 05:02:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:50.853 05:02:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:50.853 05:02:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:50.853 05:02:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.853 05:02:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:50.853 05:02:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.853 05:02:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:50.853 "name": "raid_bdev1", 00:12:50.853 "uuid": "410dd492-4e6e-4bf0-9abe-b1bd3c9db502", 00:12:50.853 "strip_size_kb": 0, 00:12:50.853 "state": "online", 00:12:50.853 "raid_level": "raid1", 00:12:50.853 "superblock": true, 00:12:50.853 "num_base_bdevs": 4, 00:12:50.853 "num_base_bdevs_discovered": 2, 00:12:50.853 "num_base_bdevs_operational": 2, 00:12:50.853 "base_bdevs_list": [ 00:12:50.853 { 00:12:50.853 "name": null, 00:12:50.853 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:50.853 "is_configured": false, 00:12:50.853 
"data_offset": 0, 00:12:50.853 "data_size": 63488 00:12:50.853 }, 00:12:50.853 { 00:12:50.853 "name": null, 00:12:50.853 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:50.853 "is_configured": false, 00:12:50.853 "data_offset": 2048, 00:12:50.853 "data_size": 63488 00:12:50.853 }, 00:12:50.853 { 00:12:50.853 "name": "BaseBdev3", 00:12:50.853 "uuid": "dea07963-0c09-5749-a518-ed3323560e82", 00:12:50.853 "is_configured": true, 00:12:50.853 "data_offset": 2048, 00:12:50.853 "data_size": 63488 00:12:50.853 }, 00:12:50.853 { 00:12:50.853 "name": "BaseBdev4", 00:12:50.853 "uuid": "92d10ca8-87dc-5a62-b03b-af35b6d4d927", 00:12:50.853 "is_configured": true, 00:12:50.853 "data_offset": 2048, 00:12:50.853 "data_size": 63488 00:12:50.853 } 00:12:50.853 ] 00:12:50.853 }' 00:12:50.853 05:02:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:50.853 05:02:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:51.113 05:02:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:51.113 05:02:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:51.113 05:02:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:51.113 05:02:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:51.113 05:02:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:51.113 05:02:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:51.113 05:02:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:51.113 05:02:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.113 05:02:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set 
+x 00:12:51.113 05:02:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.113 05:02:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:51.113 "name": "raid_bdev1", 00:12:51.113 "uuid": "410dd492-4e6e-4bf0-9abe-b1bd3c9db502", 00:12:51.113 "strip_size_kb": 0, 00:12:51.113 "state": "online", 00:12:51.113 "raid_level": "raid1", 00:12:51.113 "superblock": true, 00:12:51.113 "num_base_bdevs": 4, 00:12:51.113 "num_base_bdevs_discovered": 2, 00:12:51.113 "num_base_bdevs_operational": 2, 00:12:51.113 "base_bdevs_list": [ 00:12:51.113 { 00:12:51.113 "name": null, 00:12:51.113 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:51.113 "is_configured": false, 00:12:51.113 "data_offset": 0, 00:12:51.113 "data_size": 63488 00:12:51.113 }, 00:12:51.113 { 00:12:51.113 "name": null, 00:12:51.113 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:51.113 "is_configured": false, 00:12:51.113 "data_offset": 2048, 00:12:51.113 "data_size": 63488 00:12:51.113 }, 00:12:51.113 { 00:12:51.113 "name": "BaseBdev3", 00:12:51.113 "uuid": "dea07963-0c09-5749-a518-ed3323560e82", 00:12:51.113 "is_configured": true, 00:12:51.113 "data_offset": 2048, 00:12:51.113 "data_size": 63488 00:12:51.113 }, 00:12:51.113 { 00:12:51.113 "name": "BaseBdev4", 00:12:51.113 "uuid": "92d10ca8-87dc-5a62-b03b-af35b6d4d927", 00:12:51.113 "is_configured": true, 00:12:51.113 "data_offset": 2048, 00:12:51.113 "data_size": 63488 00:12:51.113 } 00:12:51.113 ] 00:12:51.113 }' 00:12:51.113 05:02:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:51.113 05:02:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:51.113 05:02:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:51.113 05:02:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:51.113 
05:02:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:51.113 05:02:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@650 -- # local es=0 00:12:51.113 05:02:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:51.113 05:02:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:12:51.113 05:02:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:51.113 05:02:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:12:51.113 05:02:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:51.113 05:02:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:51.113 05:02:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.113 05:02:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:51.113 [2024-12-14 05:02:01.929941] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:51.113 [2024-12-14 05:02:01.930124] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:12:51.113 [2024-12-14 05:02:01.930189] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:12:51.113 request: 00:12:51.113 { 00:12:51.113 "base_bdev": "BaseBdev1", 00:12:51.113 "raid_bdev": "raid_bdev1", 00:12:51.113 "method": "bdev_raid_add_base_bdev", 00:12:51.113 "req_id": 1 00:12:51.113 } 00:12:51.113 Got JSON-RPC error response 00:12:51.113 response: 00:12:51.113 { 00:12:51.113 "code": -22, 00:12:51.113 
"message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:12:51.113 } 00:12:51.113 05:02:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:12:51.113 05:02:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # es=1 00:12:51.113 05:02:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:51.113 05:02:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:51.113 05:02:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:51.113 05:02:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:12:52.493 05:02:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:52.493 05:02:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:52.493 05:02:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:52.493 05:02:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:52.493 05:02:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:52.493 05:02:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:52.493 05:02:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:52.493 05:02:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:52.493 05:02:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:52.493 05:02:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:52.493 05:02:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:52.493 05:02:02 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:52.493 05:02:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.493 05:02:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:52.493 05:02:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.493 05:02:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:52.493 "name": "raid_bdev1", 00:12:52.493 "uuid": "410dd492-4e6e-4bf0-9abe-b1bd3c9db502", 00:12:52.493 "strip_size_kb": 0, 00:12:52.493 "state": "online", 00:12:52.493 "raid_level": "raid1", 00:12:52.493 "superblock": true, 00:12:52.493 "num_base_bdevs": 4, 00:12:52.493 "num_base_bdevs_discovered": 2, 00:12:52.493 "num_base_bdevs_operational": 2, 00:12:52.493 "base_bdevs_list": [ 00:12:52.493 { 00:12:52.493 "name": null, 00:12:52.493 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:52.493 "is_configured": false, 00:12:52.494 "data_offset": 0, 00:12:52.494 "data_size": 63488 00:12:52.494 }, 00:12:52.494 { 00:12:52.494 "name": null, 00:12:52.494 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:52.494 "is_configured": false, 00:12:52.494 "data_offset": 2048, 00:12:52.494 "data_size": 63488 00:12:52.494 }, 00:12:52.494 { 00:12:52.494 "name": "BaseBdev3", 00:12:52.494 "uuid": "dea07963-0c09-5749-a518-ed3323560e82", 00:12:52.494 "is_configured": true, 00:12:52.494 "data_offset": 2048, 00:12:52.494 "data_size": 63488 00:12:52.494 }, 00:12:52.494 { 00:12:52.494 "name": "BaseBdev4", 00:12:52.494 "uuid": "92d10ca8-87dc-5a62-b03b-af35b6d4d927", 00:12:52.494 "is_configured": true, 00:12:52.494 "data_offset": 2048, 00:12:52.494 "data_size": 63488 00:12:52.494 } 00:12:52.494 ] 00:12:52.494 }' 00:12:52.494 05:02:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:52.494 05:02:03 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:52.754 05:02:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:52.754 05:02:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:52.754 05:02:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:52.754 05:02:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:52.754 05:02:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:52.754 05:02:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:52.754 05:02:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.754 05:02:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:52.754 05:02:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:52.754 05:02:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.754 05:02:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:52.754 "name": "raid_bdev1", 00:12:52.754 "uuid": "410dd492-4e6e-4bf0-9abe-b1bd3c9db502", 00:12:52.754 "strip_size_kb": 0, 00:12:52.754 "state": "online", 00:12:52.754 "raid_level": "raid1", 00:12:52.754 "superblock": true, 00:12:52.754 "num_base_bdevs": 4, 00:12:52.754 "num_base_bdevs_discovered": 2, 00:12:52.754 "num_base_bdevs_operational": 2, 00:12:52.754 "base_bdevs_list": [ 00:12:52.754 { 00:12:52.754 "name": null, 00:12:52.754 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:52.754 "is_configured": false, 00:12:52.754 "data_offset": 0, 00:12:52.754 "data_size": 63488 00:12:52.754 }, 00:12:52.754 { 00:12:52.754 "name": null, 00:12:52.754 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:12:52.754 "is_configured": false, 00:12:52.754 "data_offset": 2048, 00:12:52.754 "data_size": 63488 00:12:52.754 }, 00:12:52.754 { 00:12:52.754 "name": "BaseBdev3", 00:12:52.754 "uuid": "dea07963-0c09-5749-a518-ed3323560e82", 00:12:52.754 "is_configured": true, 00:12:52.754 "data_offset": 2048, 00:12:52.754 "data_size": 63488 00:12:52.754 }, 00:12:52.754 { 00:12:52.754 "name": "BaseBdev4", 00:12:52.754 "uuid": "92d10ca8-87dc-5a62-b03b-af35b6d4d927", 00:12:52.754 "is_configured": true, 00:12:52.754 "data_offset": 2048, 00:12:52.754 "data_size": 63488 00:12:52.754 } 00:12:52.754 ] 00:12:52.754 }' 00:12:52.754 05:02:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:52.754 05:02:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:52.754 05:02:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:52.754 05:02:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:52.754 05:02:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 89743 00:12:52.754 05:02:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@950 -- # '[' -z 89743 ']' 00:12:52.754 05:02:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # kill -0 89743 00:12:52.754 05:02:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@955 -- # uname 00:12:52.754 05:02:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:52.754 05:02:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 89743 00:12:52.754 05:02:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:52.754 05:02:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo 
']' 00:12:52.754 05:02:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@968 -- # echo 'killing process with pid 89743' 00:12:52.754 killing process with pid 89743 00:12:52.754 Received shutdown signal, test time was about 18.033935 seconds 00:12:52.754 00:12:52.754 Latency(us) 00:12:52.754 [2024-12-14T05:02:03.637Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:52.754 [2024-12-14T05:02:03.637Z] =================================================================================================================== 00:12:52.754 [2024-12-14T05:02:03.637Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:52.754 05:02:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@969 -- # kill 89743 00:12:52.754 [2024-12-14 05:02:03.603477] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:52.754 05:02:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@974 -- # wait 89743 00:12:52.754 [2024-12-14 05:02:03.603626] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:52.754 [2024-12-14 05:02:03.603697] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:52.754 [2024-12-14 05:02:03.603707] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state offline 00:12:53.014 [2024-12-14 05:02:03.650567] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:53.014 05:02:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:12:53.014 00:12:53.014 real 0m20.026s 00:12:53.014 user 0m26.703s 00:12:53.014 sys 0m2.690s 00:12:53.014 05:02:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:53.014 ************************************ 00:12:53.014 END TEST raid_rebuild_test_sb_io 00:12:53.014 ************************************ 00:12:53.014 05:02:03 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:12:53.275 05:02:03 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:12:53.275 05:02:03 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 3 false 00:12:53.275 05:02:03 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:12:53.275 05:02:03 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:53.275 05:02:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:53.275 ************************************ 00:12:53.275 START TEST raid5f_state_function_test 00:12:53.275 ************************************ 00:12:53.275 05:02:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid5f 3 false 00:12:53.275 05:02:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:12:53.275 05:02:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:12:53.275 05:02:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:12:53.275 05:02:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:12:53.275 05:02:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:12:53.275 05:02:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:53.275 05:02:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:12:53.275 05:02:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:53.275 05:02:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:53.275 05:02:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:12:53.275 05:02:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:53.275 05:02:03 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:53.275 05:02:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:12:53.275 05:02:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:53.275 05:02:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:53.275 05:02:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:12:53.275 05:02:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:12:53.275 05:02:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:12:53.275 05:02:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:12:53.275 05:02:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:12:53.275 05:02:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:12:53.275 05:02:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:12:53.275 05:02:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:12:53.275 05:02:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:12:53.275 05:02:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:12:53.275 05:02:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:12:53.275 05:02:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=90455 00:12:53.275 Process raid pid: 90455 00:12:53.275 05:02:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:53.275 
05:02:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 90455' 00:12:53.275 05:02:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 90455 00:12:53.275 05:02:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 90455 ']' 00:12:53.275 05:02:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:53.275 05:02:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:53.275 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:53.275 05:02:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:53.275 05:02:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:53.275 05:02:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.275 [2024-12-14 05:02:04.081505] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:12:53.275 [2024-12-14 05:02:04.081703] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:53.535 [2024-12-14 05:02:04.250235] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:53.535 [2024-12-14 05:02:04.298847] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:53.535 [2024-12-14 05:02:04.341859] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:53.535 [2024-12-14 05:02:04.341896] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:54.105 05:02:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:54.105 05:02:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:12:54.105 05:02:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:54.105 05:02:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.105 05:02:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.105 [2024-12-14 05:02:04.891637] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:54.105 [2024-12-14 05:02:04.891748] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:54.105 [2024-12-14 05:02:04.891782] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:54.105 [2024-12-14 05:02:04.891805] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:54.105 [2024-12-14 05:02:04.891823] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:12:54.105 [2024-12-14 05:02:04.891848] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:54.105 05:02:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.105 05:02:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:12:54.105 05:02:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:54.105 05:02:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:54.105 05:02:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:54.105 05:02:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:54.105 05:02:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:54.105 05:02:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:54.105 05:02:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:54.105 05:02:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:54.105 05:02:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:54.105 05:02:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:54.105 05:02:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.105 05:02:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.105 05:02:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:54.105 05:02:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:12:54.105 05:02:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:54.105 "name": "Existed_Raid", 00:12:54.105 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:54.105 "strip_size_kb": 64, 00:12:54.105 "state": "configuring", 00:12:54.105 "raid_level": "raid5f", 00:12:54.105 "superblock": false, 00:12:54.105 "num_base_bdevs": 3, 00:12:54.105 "num_base_bdevs_discovered": 0, 00:12:54.105 "num_base_bdevs_operational": 3, 00:12:54.105 "base_bdevs_list": [ 00:12:54.105 { 00:12:54.105 "name": "BaseBdev1", 00:12:54.105 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:54.105 "is_configured": false, 00:12:54.105 "data_offset": 0, 00:12:54.105 "data_size": 0 00:12:54.105 }, 00:12:54.105 { 00:12:54.105 "name": "BaseBdev2", 00:12:54.105 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:54.105 "is_configured": false, 00:12:54.105 "data_offset": 0, 00:12:54.105 "data_size": 0 00:12:54.105 }, 00:12:54.105 { 00:12:54.105 "name": "BaseBdev3", 00:12:54.105 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:54.105 "is_configured": false, 00:12:54.105 "data_offset": 0, 00:12:54.105 "data_size": 0 00:12:54.105 } 00:12:54.105 ] 00:12:54.105 }' 00:12:54.105 05:02:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:54.105 05:02:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.675 05:02:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:54.675 05:02:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.675 05:02:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.675 [2024-12-14 05:02:05.326814] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:54.675 [2024-12-14 05:02:05.326888] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000006280 name Existed_Raid, state configuring 00:12:54.675 05:02:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.675 05:02:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:54.675 05:02:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.675 05:02:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.675 [2024-12-14 05:02:05.338825] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:54.675 [2024-12-14 05:02:05.338898] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:54.675 [2024-12-14 05:02:05.338924] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:54.675 [2024-12-14 05:02:05.338945] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:54.675 [2024-12-14 05:02:05.338962] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:54.675 [2024-12-14 05:02:05.338981] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:54.675 05:02:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.675 05:02:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:54.675 05:02:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.675 05:02:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.675 [2024-12-14 05:02:05.359660] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:54.675 BaseBdev1 00:12:54.675 05:02:05 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.675 05:02:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:12:54.675 05:02:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:12:54.675 05:02:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:54.675 05:02:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:12:54.675 05:02:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:54.675 05:02:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:54.675 05:02:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:54.675 05:02:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.675 05:02:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.675 05:02:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.675 05:02:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:54.675 05:02:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.675 05:02:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.675 [ 00:12:54.675 { 00:12:54.676 "name": "BaseBdev1", 00:12:54.676 "aliases": [ 00:12:54.676 "dd315935-13cc-4699-8e93-85bd8720f2d6" 00:12:54.676 ], 00:12:54.676 "product_name": "Malloc disk", 00:12:54.676 "block_size": 512, 00:12:54.676 "num_blocks": 65536, 00:12:54.676 "uuid": "dd315935-13cc-4699-8e93-85bd8720f2d6", 00:12:54.676 "assigned_rate_limits": { 00:12:54.676 "rw_ios_per_sec": 0, 00:12:54.676 
"rw_mbytes_per_sec": 0, 00:12:54.676 "r_mbytes_per_sec": 0, 00:12:54.676 "w_mbytes_per_sec": 0 00:12:54.676 }, 00:12:54.676 "claimed": true, 00:12:54.676 "claim_type": "exclusive_write", 00:12:54.676 "zoned": false, 00:12:54.676 "supported_io_types": { 00:12:54.676 "read": true, 00:12:54.676 "write": true, 00:12:54.676 "unmap": true, 00:12:54.676 "flush": true, 00:12:54.676 "reset": true, 00:12:54.676 "nvme_admin": false, 00:12:54.676 "nvme_io": false, 00:12:54.676 "nvme_io_md": false, 00:12:54.676 "write_zeroes": true, 00:12:54.676 "zcopy": true, 00:12:54.676 "get_zone_info": false, 00:12:54.676 "zone_management": false, 00:12:54.676 "zone_append": false, 00:12:54.676 "compare": false, 00:12:54.676 "compare_and_write": false, 00:12:54.676 "abort": true, 00:12:54.676 "seek_hole": false, 00:12:54.676 "seek_data": false, 00:12:54.676 "copy": true, 00:12:54.676 "nvme_iov_md": false 00:12:54.676 }, 00:12:54.676 "memory_domains": [ 00:12:54.676 { 00:12:54.676 "dma_device_id": "system", 00:12:54.676 "dma_device_type": 1 00:12:54.676 }, 00:12:54.676 { 00:12:54.676 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:54.676 "dma_device_type": 2 00:12:54.676 } 00:12:54.676 ], 00:12:54.676 "driver_specific": {} 00:12:54.676 } 00:12:54.676 ] 00:12:54.676 05:02:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.676 05:02:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:12:54.676 05:02:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:12:54.676 05:02:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:54.676 05:02:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:54.676 05:02:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:54.676 05:02:05 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:54.676 05:02:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:54.676 05:02:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:54.676 05:02:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:54.676 05:02:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:54.676 05:02:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:54.676 05:02:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:54.676 05:02:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:54.676 05:02:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.676 05:02:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.676 05:02:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.676 05:02:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:54.676 "name": "Existed_Raid", 00:12:54.676 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:54.676 "strip_size_kb": 64, 00:12:54.676 "state": "configuring", 00:12:54.676 "raid_level": "raid5f", 00:12:54.676 "superblock": false, 00:12:54.676 "num_base_bdevs": 3, 00:12:54.676 "num_base_bdevs_discovered": 1, 00:12:54.676 "num_base_bdevs_operational": 3, 00:12:54.676 "base_bdevs_list": [ 00:12:54.676 { 00:12:54.676 "name": "BaseBdev1", 00:12:54.676 "uuid": "dd315935-13cc-4699-8e93-85bd8720f2d6", 00:12:54.676 "is_configured": true, 00:12:54.676 "data_offset": 0, 00:12:54.676 "data_size": 65536 00:12:54.676 }, 00:12:54.676 { 00:12:54.676 "name": 
"BaseBdev2", 00:12:54.676 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:54.676 "is_configured": false, 00:12:54.676 "data_offset": 0, 00:12:54.676 "data_size": 0 00:12:54.676 }, 00:12:54.676 { 00:12:54.676 "name": "BaseBdev3", 00:12:54.676 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:54.676 "is_configured": false, 00:12:54.676 "data_offset": 0, 00:12:54.676 "data_size": 0 00:12:54.676 } 00:12:54.676 ] 00:12:54.676 }' 00:12:54.676 05:02:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:54.676 05:02:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.935 05:02:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:54.935 05:02:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.935 05:02:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.935 [2024-12-14 05:02:05.802982] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:54.935 [2024-12-14 05:02:05.803085] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:12:54.935 05:02:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.935 05:02:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:54.935 05:02:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.935 05:02:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.935 [2024-12-14 05:02:05.815053] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:55.194 [2024-12-14 05:02:05.816952] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev2 00:12:55.194 [2024-12-14 05:02:05.817045] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:55.194 [2024-12-14 05:02:05.817074] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:55.194 [2024-12-14 05:02:05.817098] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:55.194 05:02:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.194 05:02:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:12:55.194 05:02:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:55.194 05:02:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:12:55.194 05:02:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:55.194 05:02:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:55.194 05:02:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:55.194 05:02:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:55.194 05:02:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:55.194 05:02:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:55.194 05:02:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:55.194 05:02:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:55.194 05:02:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:55.194 05:02:05 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:55.194 05:02:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:55.194 05:02:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.194 05:02:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.194 05:02:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.194 05:02:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:55.194 "name": "Existed_Raid", 00:12:55.194 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:55.194 "strip_size_kb": 64, 00:12:55.194 "state": "configuring", 00:12:55.194 "raid_level": "raid5f", 00:12:55.194 "superblock": false, 00:12:55.194 "num_base_bdevs": 3, 00:12:55.194 "num_base_bdevs_discovered": 1, 00:12:55.194 "num_base_bdevs_operational": 3, 00:12:55.194 "base_bdevs_list": [ 00:12:55.194 { 00:12:55.194 "name": "BaseBdev1", 00:12:55.194 "uuid": "dd315935-13cc-4699-8e93-85bd8720f2d6", 00:12:55.194 "is_configured": true, 00:12:55.194 "data_offset": 0, 00:12:55.194 "data_size": 65536 00:12:55.194 }, 00:12:55.194 { 00:12:55.194 "name": "BaseBdev2", 00:12:55.194 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:55.194 "is_configured": false, 00:12:55.194 "data_offset": 0, 00:12:55.194 "data_size": 0 00:12:55.194 }, 00:12:55.194 { 00:12:55.194 "name": "BaseBdev3", 00:12:55.194 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:55.194 "is_configured": false, 00:12:55.194 "data_offset": 0, 00:12:55.194 "data_size": 0 00:12:55.194 } 00:12:55.194 ] 00:12:55.194 }' 00:12:55.194 05:02:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:55.194 05:02:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.454 05:02:06 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:55.454 05:02:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.454 05:02:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.454 [2024-12-14 05:02:06.318939] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:55.454 BaseBdev2 00:12:55.454 05:02:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.454 05:02:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:12:55.454 05:02:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:12:55.454 05:02:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:55.454 05:02:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:12:55.454 05:02:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:55.454 05:02:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:55.454 05:02:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:55.454 05:02:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.454 05:02:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.454 05:02:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.454 05:02:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:55.454 05:02:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.454 05:02:06 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:55.713 [ 00:12:55.713 { 00:12:55.713 "name": "BaseBdev2", 00:12:55.713 "aliases": [ 00:12:55.713 "9fe8fe73-4f98-4e43-a2c9-1aff88ab6e3f" 00:12:55.713 ], 00:12:55.713 "product_name": "Malloc disk", 00:12:55.713 "block_size": 512, 00:12:55.713 "num_blocks": 65536, 00:12:55.713 "uuid": "9fe8fe73-4f98-4e43-a2c9-1aff88ab6e3f", 00:12:55.713 "assigned_rate_limits": { 00:12:55.713 "rw_ios_per_sec": 0, 00:12:55.713 "rw_mbytes_per_sec": 0, 00:12:55.713 "r_mbytes_per_sec": 0, 00:12:55.714 "w_mbytes_per_sec": 0 00:12:55.714 }, 00:12:55.714 "claimed": true, 00:12:55.714 "claim_type": "exclusive_write", 00:12:55.714 "zoned": false, 00:12:55.714 "supported_io_types": { 00:12:55.714 "read": true, 00:12:55.714 "write": true, 00:12:55.714 "unmap": true, 00:12:55.714 "flush": true, 00:12:55.714 "reset": true, 00:12:55.714 "nvme_admin": false, 00:12:55.714 "nvme_io": false, 00:12:55.714 "nvme_io_md": false, 00:12:55.714 "write_zeroes": true, 00:12:55.714 "zcopy": true, 00:12:55.714 "get_zone_info": false, 00:12:55.714 "zone_management": false, 00:12:55.714 "zone_append": false, 00:12:55.714 "compare": false, 00:12:55.714 "compare_and_write": false, 00:12:55.714 "abort": true, 00:12:55.714 "seek_hole": false, 00:12:55.714 "seek_data": false, 00:12:55.714 "copy": true, 00:12:55.714 "nvme_iov_md": false 00:12:55.714 }, 00:12:55.714 "memory_domains": [ 00:12:55.714 { 00:12:55.714 "dma_device_id": "system", 00:12:55.714 "dma_device_type": 1 00:12:55.714 }, 00:12:55.714 { 00:12:55.714 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:55.714 "dma_device_type": 2 00:12:55.714 } 00:12:55.714 ], 00:12:55.714 "driver_specific": {} 00:12:55.714 } 00:12:55.714 ] 00:12:55.714 05:02:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.714 05:02:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:12:55.714 05:02:06 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:55.714 05:02:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:55.714 05:02:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:12:55.714 05:02:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:55.714 05:02:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:55.714 05:02:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:55.714 05:02:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:55.714 05:02:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:55.714 05:02:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:55.714 05:02:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:55.714 05:02:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:55.714 05:02:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:55.714 05:02:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:55.714 05:02:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:55.714 05:02:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.714 05:02:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.714 05:02:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.714 05:02:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- 
# raid_bdev_info='{ 00:12:55.714 "name": "Existed_Raid", 00:12:55.714 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:55.714 "strip_size_kb": 64, 00:12:55.714 "state": "configuring", 00:12:55.714 "raid_level": "raid5f", 00:12:55.714 "superblock": false, 00:12:55.714 "num_base_bdevs": 3, 00:12:55.714 "num_base_bdevs_discovered": 2, 00:12:55.714 "num_base_bdevs_operational": 3, 00:12:55.714 "base_bdevs_list": [ 00:12:55.714 { 00:12:55.714 "name": "BaseBdev1", 00:12:55.714 "uuid": "dd315935-13cc-4699-8e93-85bd8720f2d6", 00:12:55.714 "is_configured": true, 00:12:55.714 "data_offset": 0, 00:12:55.714 "data_size": 65536 00:12:55.714 }, 00:12:55.714 { 00:12:55.714 "name": "BaseBdev2", 00:12:55.714 "uuid": "9fe8fe73-4f98-4e43-a2c9-1aff88ab6e3f", 00:12:55.714 "is_configured": true, 00:12:55.714 "data_offset": 0, 00:12:55.714 "data_size": 65536 00:12:55.714 }, 00:12:55.714 { 00:12:55.714 "name": "BaseBdev3", 00:12:55.714 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:55.714 "is_configured": false, 00:12:55.714 "data_offset": 0, 00:12:55.714 "data_size": 0 00:12:55.714 } 00:12:55.714 ] 00:12:55.714 }' 00:12:55.714 05:02:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:55.714 05:02:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.973 05:02:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:55.973 05:02:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.973 05:02:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.973 [2024-12-14 05:02:06.853091] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:55.973 [2024-12-14 05:02:06.853226] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:12:55.973 [2024-12-14 05:02:06.853258] 
bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:12:55.973 [2024-12-14 05:02:06.853604] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:12:55.973 [2024-12-14 05:02:06.854101] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:12:55.973 [2024-12-14 05:02:06.854152] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:12:55.973 [2024-12-14 05:02:06.854414] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:56.256 BaseBdev3 00:12:56.256 05:02:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.256 05:02:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:12:56.256 05:02:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:12:56.256 05:02:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:56.256 05:02:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:12:56.256 05:02:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:56.256 05:02:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:56.256 05:02:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:56.256 05:02:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.256 05:02:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.256 05:02:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.256 05:02:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev3 -t 2000 00:12:56.256 05:02:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.256 05:02:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.256 [ 00:12:56.256 { 00:12:56.256 "name": "BaseBdev3", 00:12:56.256 "aliases": [ 00:12:56.256 "bc401860-635a-4d90-9b0e-131955259c96" 00:12:56.256 ], 00:12:56.256 "product_name": "Malloc disk", 00:12:56.256 "block_size": 512, 00:12:56.256 "num_blocks": 65536, 00:12:56.256 "uuid": "bc401860-635a-4d90-9b0e-131955259c96", 00:12:56.256 "assigned_rate_limits": { 00:12:56.256 "rw_ios_per_sec": 0, 00:12:56.256 "rw_mbytes_per_sec": 0, 00:12:56.256 "r_mbytes_per_sec": 0, 00:12:56.256 "w_mbytes_per_sec": 0 00:12:56.256 }, 00:12:56.256 "claimed": true, 00:12:56.256 "claim_type": "exclusive_write", 00:12:56.256 "zoned": false, 00:12:56.256 "supported_io_types": { 00:12:56.256 "read": true, 00:12:56.256 "write": true, 00:12:56.256 "unmap": true, 00:12:56.256 "flush": true, 00:12:56.256 "reset": true, 00:12:56.256 "nvme_admin": false, 00:12:56.256 "nvme_io": false, 00:12:56.256 "nvme_io_md": false, 00:12:56.256 "write_zeroes": true, 00:12:56.256 "zcopy": true, 00:12:56.256 "get_zone_info": false, 00:12:56.256 "zone_management": false, 00:12:56.256 "zone_append": false, 00:12:56.256 "compare": false, 00:12:56.256 "compare_and_write": false, 00:12:56.256 "abort": true, 00:12:56.256 "seek_hole": false, 00:12:56.256 "seek_data": false, 00:12:56.256 "copy": true, 00:12:56.256 "nvme_iov_md": false 00:12:56.256 }, 00:12:56.256 "memory_domains": [ 00:12:56.256 { 00:12:56.256 "dma_device_id": "system", 00:12:56.256 "dma_device_type": 1 00:12:56.257 }, 00:12:56.257 { 00:12:56.257 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:56.257 "dma_device_type": 2 00:12:56.257 } 00:12:56.257 ], 00:12:56.257 "driver_specific": {} 00:12:56.257 } 00:12:56.257 ] 00:12:56.257 05:02:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:12:56.257 05:02:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:12:56.257 05:02:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:56.257 05:02:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:56.257 05:02:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:12:56.257 05:02:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:56.257 05:02:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:56.257 05:02:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:56.257 05:02:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:56.257 05:02:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:56.257 05:02:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:56.257 05:02:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:56.257 05:02:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:56.257 05:02:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:56.257 05:02:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:56.257 05:02:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:56.257 05:02:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.257 05:02:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.257 05:02:06 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.257 05:02:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:56.257 "name": "Existed_Raid", 00:12:56.257 "uuid": "88f4c993-a809-45fa-aab2-1d26102cbfb3", 00:12:56.257 "strip_size_kb": 64, 00:12:56.257 "state": "online", 00:12:56.257 "raid_level": "raid5f", 00:12:56.257 "superblock": false, 00:12:56.257 "num_base_bdevs": 3, 00:12:56.257 "num_base_bdevs_discovered": 3, 00:12:56.257 "num_base_bdevs_operational": 3, 00:12:56.257 "base_bdevs_list": [ 00:12:56.257 { 00:12:56.257 "name": "BaseBdev1", 00:12:56.257 "uuid": "dd315935-13cc-4699-8e93-85bd8720f2d6", 00:12:56.257 "is_configured": true, 00:12:56.257 "data_offset": 0, 00:12:56.257 "data_size": 65536 00:12:56.257 }, 00:12:56.257 { 00:12:56.257 "name": "BaseBdev2", 00:12:56.257 "uuid": "9fe8fe73-4f98-4e43-a2c9-1aff88ab6e3f", 00:12:56.257 "is_configured": true, 00:12:56.257 "data_offset": 0, 00:12:56.257 "data_size": 65536 00:12:56.257 }, 00:12:56.257 { 00:12:56.257 "name": "BaseBdev3", 00:12:56.257 "uuid": "bc401860-635a-4d90-9b0e-131955259c96", 00:12:56.257 "is_configured": true, 00:12:56.257 "data_offset": 0, 00:12:56.257 "data_size": 65536 00:12:56.257 } 00:12:56.257 ] 00:12:56.257 }' 00:12:56.257 05:02:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:56.257 05:02:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.580 05:02:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:12:56.580 05:02:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:56.580 05:02:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:56.580 05:02:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:56.580 05:02:07 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:56.580 05:02:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:56.580 05:02:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:56.580 05:02:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:56.580 05:02:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.580 05:02:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.580 [2024-12-14 05:02:07.344456] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:56.580 05:02:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.580 05:02:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:56.580 "name": "Existed_Raid", 00:12:56.580 "aliases": [ 00:12:56.580 "88f4c993-a809-45fa-aab2-1d26102cbfb3" 00:12:56.580 ], 00:12:56.580 "product_name": "Raid Volume", 00:12:56.580 "block_size": 512, 00:12:56.580 "num_blocks": 131072, 00:12:56.580 "uuid": "88f4c993-a809-45fa-aab2-1d26102cbfb3", 00:12:56.580 "assigned_rate_limits": { 00:12:56.580 "rw_ios_per_sec": 0, 00:12:56.580 "rw_mbytes_per_sec": 0, 00:12:56.580 "r_mbytes_per_sec": 0, 00:12:56.580 "w_mbytes_per_sec": 0 00:12:56.580 }, 00:12:56.580 "claimed": false, 00:12:56.580 "zoned": false, 00:12:56.580 "supported_io_types": { 00:12:56.580 "read": true, 00:12:56.580 "write": true, 00:12:56.580 "unmap": false, 00:12:56.580 "flush": false, 00:12:56.580 "reset": true, 00:12:56.580 "nvme_admin": false, 00:12:56.580 "nvme_io": false, 00:12:56.580 "nvme_io_md": false, 00:12:56.580 "write_zeroes": true, 00:12:56.580 "zcopy": false, 00:12:56.580 "get_zone_info": false, 00:12:56.580 "zone_management": false, 00:12:56.580 "zone_append": false, 
00:12:56.580 "compare": false, 00:12:56.580 "compare_and_write": false, 00:12:56.580 "abort": false, 00:12:56.580 "seek_hole": false, 00:12:56.580 "seek_data": false, 00:12:56.580 "copy": false, 00:12:56.580 "nvme_iov_md": false 00:12:56.580 }, 00:12:56.580 "driver_specific": { 00:12:56.580 "raid": { 00:12:56.580 "uuid": "88f4c993-a809-45fa-aab2-1d26102cbfb3", 00:12:56.580 "strip_size_kb": 64, 00:12:56.580 "state": "online", 00:12:56.580 "raid_level": "raid5f", 00:12:56.580 "superblock": false, 00:12:56.580 "num_base_bdevs": 3, 00:12:56.580 "num_base_bdevs_discovered": 3, 00:12:56.580 "num_base_bdevs_operational": 3, 00:12:56.580 "base_bdevs_list": [ 00:12:56.580 { 00:12:56.580 "name": "BaseBdev1", 00:12:56.580 "uuid": "dd315935-13cc-4699-8e93-85bd8720f2d6", 00:12:56.580 "is_configured": true, 00:12:56.580 "data_offset": 0, 00:12:56.580 "data_size": 65536 00:12:56.580 }, 00:12:56.580 { 00:12:56.580 "name": "BaseBdev2", 00:12:56.580 "uuid": "9fe8fe73-4f98-4e43-a2c9-1aff88ab6e3f", 00:12:56.580 "is_configured": true, 00:12:56.580 "data_offset": 0, 00:12:56.580 "data_size": 65536 00:12:56.580 }, 00:12:56.580 { 00:12:56.580 "name": "BaseBdev3", 00:12:56.580 "uuid": "bc401860-635a-4d90-9b0e-131955259c96", 00:12:56.580 "is_configured": true, 00:12:56.580 "data_offset": 0, 00:12:56.580 "data_size": 65536 00:12:56.580 } 00:12:56.580 ] 00:12:56.580 } 00:12:56.580 } 00:12:56.580 }' 00:12:56.580 05:02:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:56.580 05:02:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:12:56.580 BaseBdev2 00:12:56.580 BaseBdev3' 00:12:56.580 05:02:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:56.892 05:02:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 
' 00:12:56.892 05:02:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:56.892 05:02:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:56.892 05:02:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:12:56.893 05:02:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.893 05:02:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.893 05:02:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.893 05:02:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:56.893 05:02:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:56.893 05:02:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:56.893 05:02:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:56.893 05:02:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:56.893 05:02:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.893 05:02:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.893 05:02:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.893 05:02:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:56.893 05:02:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:56.893 05:02:07 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:56.893 05:02:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:56.893 05:02:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:56.893 05:02:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.893 05:02:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.893 05:02:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.893 05:02:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:56.893 05:02:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:56.893 05:02:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:56.893 05:02:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.893 05:02:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.893 [2024-12-14 05:02:07.595888] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:56.893 05:02:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.893 05:02:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:12:56.893 05:02:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:12:56.893 05:02:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:56.893 05:02:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:12:56.893 05:02:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:12:56.893 
05:02:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:12:56.893 05:02:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:56.893 05:02:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:56.893 05:02:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:56.893 05:02:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:56.893 05:02:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:56.893 05:02:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:56.893 05:02:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:56.893 05:02:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:56.893 05:02:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:56.893 05:02:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:56.893 05:02:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:56.893 05:02:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.893 05:02:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.893 05:02:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.893 05:02:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:56.893 "name": "Existed_Raid", 00:12:56.893 "uuid": "88f4c993-a809-45fa-aab2-1d26102cbfb3", 00:12:56.893 "strip_size_kb": 64, 00:12:56.893 "state": 
"online", 00:12:56.893 "raid_level": "raid5f", 00:12:56.893 "superblock": false, 00:12:56.893 "num_base_bdevs": 3, 00:12:56.893 "num_base_bdevs_discovered": 2, 00:12:56.893 "num_base_bdevs_operational": 2, 00:12:56.893 "base_bdevs_list": [ 00:12:56.893 { 00:12:56.893 "name": null, 00:12:56.893 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:56.893 "is_configured": false, 00:12:56.893 "data_offset": 0, 00:12:56.893 "data_size": 65536 00:12:56.893 }, 00:12:56.893 { 00:12:56.893 "name": "BaseBdev2", 00:12:56.893 "uuid": "9fe8fe73-4f98-4e43-a2c9-1aff88ab6e3f", 00:12:56.893 "is_configured": true, 00:12:56.893 "data_offset": 0, 00:12:56.893 "data_size": 65536 00:12:56.893 }, 00:12:56.893 { 00:12:56.893 "name": "BaseBdev3", 00:12:56.893 "uuid": "bc401860-635a-4d90-9b0e-131955259c96", 00:12:56.893 "is_configured": true, 00:12:56.893 "data_offset": 0, 00:12:56.893 "data_size": 65536 00:12:56.893 } 00:12:56.893 ] 00:12:56.893 }' 00:12:56.893 05:02:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:56.893 05:02:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.160 05:02:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:12:57.160 05:02:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:57.160 05:02:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:57.160 05:02:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:57.160 05:02:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.160 05:02:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.421 05:02:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.421 05:02:08 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:57.421 05:02:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:57.421 05:02:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:12:57.421 05:02:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.421 05:02:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.421 [2024-12-14 05:02:08.090297] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:57.421 [2024-12-14 05:02:08.090448] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:57.421 [2024-12-14 05:02:08.101752] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:57.421 05:02:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.421 05:02:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:57.421 05:02:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:57.421 05:02:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:57.421 05:02:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.421 05:02:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.421 05:02:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:57.421 05:02:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.421 05:02:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:57.421 05:02:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 
00:12:57.421 05:02:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:12:57.421 05:02:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.421 05:02:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.421 [2024-12-14 05:02:08.161673] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:57.421 [2024-12-14 05:02:08.161757] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:12:57.421 05:02:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.421 05:02:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:57.421 05:02:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:57.421 05:02:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:57.421 05:02:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:12:57.421 05:02:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.421 05:02:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.421 05:02:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.421 05:02:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:12:57.421 05:02:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:12:57.421 05:02:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:12:57.421 05:02:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:12:57.421 05:02:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < 
num_base_bdevs )) 00:12:57.421 05:02:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:57.421 05:02:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.421 05:02:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.421 BaseBdev2 00:12:57.421 05:02:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.421 05:02:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:12:57.421 05:02:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:12:57.421 05:02:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:57.421 05:02:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:12:57.421 05:02:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:57.421 05:02:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:57.421 05:02:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:57.421 05:02:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.421 05:02:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.421 05:02:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.421 05:02:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:57.421 05:02:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.421 05:02:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:12:57.421 [ 00:12:57.421 { 00:12:57.421 "name": "BaseBdev2", 00:12:57.421 "aliases": [ 00:12:57.421 "c611ca1d-36d3-4d86-a302-16923489ccac" 00:12:57.421 ], 00:12:57.421 "product_name": "Malloc disk", 00:12:57.421 "block_size": 512, 00:12:57.421 "num_blocks": 65536, 00:12:57.421 "uuid": "c611ca1d-36d3-4d86-a302-16923489ccac", 00:12:57.421 "assigned_rate_limits": { 00:12:57.421 "rw_ios_per_sec": 0, 00:12:57.421 "rw_mbytes_per_sec": 0, 00:12:57.421 "r_mbytes_per_sec": 0, 00:12:57.421 "w_mbytes_per_sec": 0 00:12:57.421 }, 00:12:57.421 "claimed": false, 00:12:57.421 "zoned": false, 00:12:57.421 "supported_io_types": { 00:12:57.421 "read": true, 00:12:57.421 "write": true, 00:12:57.421 "unmap": true, 00:12:57.421 "flush": true, 00:12:57.421 "reset": true, 00:12:57.421 "nvme_admin": false, 00:12:57.421 "nvme_io": false, 00:12:57.421 "nvme_io_md": false, 00:12:57.421 "write_zeroes": true, 00:12:57.421 "zcopy": true, 00:12:57.421 "get_zone_info": false, 00:12:57.421 "zone_management": false, 00:12:57.421 "zone_append": false, 00:12:57.421 "compare": false, 00:12:57.421 "compare_and_write": false, 00:12:57.421 "abort": true, 00:12:57.421 "seek_hole": false, 00:12:57.421 "seek_data": false, 00:12:57.421 "copy": true, 00:12:57.421 "nvme_iov_md": false 00:12:57.421 }, 00:12:57.421 "memory_domains": [ 00:12:57.421 { 00:12:57.421 "dma_device_id": "system", 00:12:57.421 "dma_device_type": 1 00:12:57.421 }, 00:12:57.421 { 00:12:57.421 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:57.421 "dma_device_type": 2 00:12:57.421 } 00:12:57.421 ], 00:12:57.421 "driver_specific": {} 00:12:57.421 } 00:12:57.421 ] 00:12:57.421 05:02:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.421 05:02:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:12:57.421 05:02:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:57.421 05:02:08 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:57.421 05:02:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:57.421 05:02:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.421 05:02:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.421 BaseBdev3 00:12:57.421 05:02:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.421 05:02:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:12:57.421 05:02:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:12:57.421 05:02:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:57.421 05:02:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:12:57.421 05:02:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:57.421 05:02:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:57.421 05:02:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:57.421 05:02:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.421 05:02:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.421 05:02:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.422 05:02:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:57.422 05:02:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.422 05:02:08 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:57.682 [ 00:12:57.682 { 00:12:57.682 "name": "BaseBdev3", 00:12:57.682 "aliases": [ 00:12:57.682 "6a4ae62c-89e4-496c-90ba-75e2a877a557" 00:12:57.682 ], 00:12:57.682 "product_name": "Malloc disk", 00:12:57.682 "block_size": 512, 00:12:57.682 "num_blocks": 65536, 00:12:57.682 "uuid": "6a4ae62c-89e4-496c-90ba-75e2a877a557", 00:12:57.682 "assigned_rate_limits": { 00:12:57.682 "rw_ios_per_sec": 0, 00:12:57.682 "rw_mbytes_per_sec": 0, 00:12:57.682 "r_mbytes_per_sec": 0, 00:12:57.682 "w_mbytes_per_sec": 0 00:12:57.682 }, 00:12:57.682 "claimed": false, 00:12:57.682 "zoned": false, 00:12:57.682 "supported_io_types": { 00:12:57.682 "read": true, 00:12:57.682 "write": true, 00:12:57.682 "unmap": true, 00:12:57.682 "flush": true, 00:12:57.682 "reset": true, 00:12:57.682 "nvme_admin": false, 00:12:57.682 "nvme_io": false, 00:12:57.682 "nvme_io_md": false, 00:12:57.682 "write_zeroes": true, 00:12:57.682 "zcopy": true, 00:12:57.682 "get_zone_info": false, 00:12:57.682 "zone_management": false, 00:12:57.682 "zone_append": false, 00:12:57.682 "compare": false, 00:12:57.682 "compare_and_write": false, 00:12:57.682 "abort": true, 00:12:57.682 "seek_hole": false, 00:12:57.682 "seek_data": false, 00:12:57.682 "copy": true, 00:12:57.682 "nvme_iov_md": false 00:12:57.682 }, 00:12:57.682 "memory_domains": [ 00:12:57.682 { 00:12:57.682 "dma_device_id": "system", 00:12:57.682 "dma_device_type": 1 00:12:57.682 }, 00:12:57.682 { 00:12:57.682 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:57.682 "dma_device_type": 2 00:12:57.682 } 00:12:57.682 ], 00:12:57.682 "driver_specific": {} 00:12:57.682 } 00:12:57.682 ] 00:12:57.682 05:02:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.682 05:02:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:12:57.682 05:02:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:57.682 05:02:08 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:57.682 05:02:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:57.682 05:02:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.682 05:02:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.682 [2024-12-14 05:02:08.331887] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:57.682 [2024-12-14 05:02:08.331969] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:57.682 [2024-12-14 05:02:08.332009] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:57.682 [2024-12-14 05:02:08.333826] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:57.682 05:02:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.682 05:02:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:12:57.682 05:02:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:57.682 05:02:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:57.682 05:02:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:57.682 05:02:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:57.682 05:02:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:57.682 05:02:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:57.682 05:02:08 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:57.682 05:02:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:57.682 05:02:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:57.682 05:02:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:57.682 05:02:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:57.682 05:02:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.682 05:02:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.682 05:02:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.682 05:02:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:57.682 "name": "Existed_Raid", 00:12:57.682 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:57.682 "strip_size_kb": 64, 00:12:57.682 "state": "configuring", 00:12:57.682 "raid_level": "raid5f", 00:12:57.682 "superblock": false, 00:12:57.682 "num_base_bdevs": 3, 00:12:57.682 "num_base_bdevs_discovered": 2, 00:12:57.682 "num_base_bdevs_operational": 3, 00:12:57.682 "base_bdevs_list": [ 00:12:57.682 { 00:12:57.682 "name": "BaseBdev1", 00:12:57.682 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:57.682 "is_configured": false, 00:12:57.682 "data_offset": 0, 00:12:57.682 "data_size": 0 00:12:57.682 }, 00:12:57.682 { 00:12:57.682 "name": "BaseBdev2", 00:12:57.683 "uuid": "c611ca1d-36d3-4d86-a302-16923489ccac", 00:12:57.683 "is_configured": true, 00:12:57.683 "data_offset": 0, 00:12:57.683 "data_size": 65536 00:12:57.683 }, 00:12:57.683 { 00:12:57.683 "name": "BaseBdev3", 00:12:57.683 "uuid": "6a4ae62c-89e4-496c-90ba-75e2a877a557", 00:12:57.683 "is_configured": true, 
00:12:57.683 "data_offset": 0, 00:12:57.683 "data_size": 65536 00:12:57.683 } 00:12:57.683 ] 00:12:57.683 }' 00:12:57.683 05:02:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:57.683 05:02:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.942 05:02:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:57.942 05:02:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.942 05:02:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.942 [2024-12-14 05:02:08.815028] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:57.942 05:02:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.943 05:02:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:12:57.943 05:02:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:57.943 05:02:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:57.943 05:02:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:57.943 05:02:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:57.943 05:02:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:57.943 05:02:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:57.943 05:02:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:57.943 05:02:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:57.943 05:02:08 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:58.203 05:02:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:58.203 05:02:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:58.203 05:02:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.203 05:02:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.203 05:02:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.203 05:02:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:58.203 "name": "Existed_Raid", 00:12:58.203 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:58.203 "strip_size_kb": 64, 00:12:58.203 "state": "configuring", 00:12:58.203 "raid_level": "raid5f", 00:12:58.203 "superblock": false, 00:12:58.203 "num_base_bdevs": 3, 00:12:58.203 "num_base_bdevs_discovered": 1, 00:12:58.203 "num_base_bdevs_operational": 3, 00:12:58.203 "base_bdevs_list": [ 00:12:58.203 { 00:12:58.203 "name": "BaseBdev1", 00:12:58.203 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:58.203 "is_configured": false, 00:12:58.203 "data_offset": 0, 00:12:58.203 "data_size": 0 00:12:58.203 }, 00:12:58.203 { 00:12:58.203 "name": null, 00:12:58.203 "uuid": "c611ca1d-36d3-4d86-a302-16923489ccac", 00:12:58.203 "is_configured": false, 00:12:58.203 "data_offset": 0, 00:12:58.203 "data_size": 65536 00:12:58.203 }, 00:12:58.203 { 00:12:58.203 "name": "BaseBdev3", 00:12:58.203 "uuid": "6a4ae62c-89e4-496c-90ba-75e2a877a557", 00:12:58.203 "is_configured": true, 00:12:58.203 "data_offset": 0, 00:12:58.203 "data_size": 65536 00:12:58.203 } 00:12:58.203 ] 00:12:58.203 }' 00:12:58.203 05:02:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:58.203 05:02:08 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.463 05:02:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:58.463 05:02:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.463 05:02:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.463 05:02:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:58.463 05:02:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.463 05:02:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:12:58.463 05:02:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:58.463 05:02:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.463 05:02:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.463 [2024-12-14 05:02:09.301343] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:58.463 BaseBdev1 00:12:58.463 05:02:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.463 05:02:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:12:58.463 05:02:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:12:58.463 05:02:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:58.463 05:02:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:12:58.463 05:02:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:58.463 05:02:09 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:58.463 05:02:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:58.463 05:02:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.463 05:02:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.463 05:02:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.463 05:02:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:58.463 05:02:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.463 05:02:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.463 [ 00:12:58.463 { 00:12:58.463 "name": "BaseBdev1", 00:12:58.463 "aliases": [ 00:12:58.463 "640b4357-8fa4-4d66-8c7b-e20c6ada8613" 00:12:58.463 ], 00:12:58.463 "product_name": "Malloc disk", 00:12:58.463 "block_size": 512, 00:12:58.463 "num_blocks": 65536, 00:12:58.463 "uuid": "640b4357-8fa4-4d66-8c7b-e20c6ada8613", 00:12:58.463 "assigned_rate_limits": { 00:12:58.463 "rw_ios_per_sec": 0, 00:12:58.463 "rw_mbytes_per_sec": 0, 00:12:58.463 "r_mbytes_per_sec": 0, 00:12:58.463 "w_mbytes_per_sec": 0 00:12:58.463 }, 00:12:58.463 "claimed": true, 00:12:58.463 "claim_type": "exclusive_write", 00:12:58.463 "zoned": false, 00:12:58.463 "supported_io_types": { 00:12:58.463 "read": true, 00:12:58.463 "write": true, 00:12:58.463 "unmap": true, 00:12:58.463 "flush": true, 00:12:58.463 "reset": true, 00:12:58.463 "nvme_admin": false, 00:12:58.463 "nvme_io": false, 00:12:58.463 "nvme_io_md": false, 00:12:58.463 "write_zeroes": true, 00:12:58.463 "zcopy": true, 00:12:58.463 "get_zone_info": false, 00:12:58.463 "zone_management": false, 00:12:58.463 "zone_append": false, 00:12:58.463 
"compare": false, 00:12:58.463 "compare_and_write": false, 00:12:58.463 "abort": true, 00:12:58.463 "seek_hole": false, 00:12:58.463 "seek_data": false, 00:12:58.463 "copy": true, 00:12:58.463 "nvme_iov_md": false 00:12:58.463 }, 00:12:58.463 "memory_domains": [ 00:12:58.463 { 00:12:58.463 "dma_device_id": "system", 00:12:58.463 "dma_device_type": 1 00:12:58.463 }, 00:12:58.463 { 00:12:58.463 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:58.463 "dma_device_type": 2 00:12:58.463 } 00:12:58.463 ], 00:12:58.463 "driver_specific": {} 00:12:58.463 } 00:12:58.463 ] 00:12:58.463 05:02:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.463 05:02:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:12:58.463 05:02:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:12:58.463 05:02:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:58.463 05:02:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:58.463 05:02:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:58.463 05:02:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:58.463 05:02:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:58.463 05:02:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:58.463 05:02:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:58.463 05:02:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:58.463 05:02:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:58.723 05:02:09 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:58.723 05:02:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.723 05:02:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.723 05:02:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:58.723 05:02:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.723 05:02:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:58.723 "name": "Existed_Raid", 00:12:58.723 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:58.723 "strip_size_kb": 64, 00:12:58.723 "state": "configuring", 00:12:58.723 "raid_level": "raid5f", 00:12:58.723 "superblock": false, 00:12:58.723 "num_base_bdevs": 3, 00:12:58.723 "num_base_bdevs_discovered": 2, 00:12:58.723 "num_base_bdevs_operational": 3, 00:12:58.723 "base_bdevs_list": [ 00:12:58.723 { 00:12:58.723 "name": "BaseBdev1", 00:12:58.723 "uuid": "640b4357-8fa4-4d66-8c7b-e20c6ada8613", 00:12:58.723 "is_configured": true, 00:12:58.723 "data_offset": 0, 00:12:58.723 "data_size": 65536 00:12:58.723 }, 00:12:58.723 { 00:12:58.723 "name": null, 00:12:58.723 "uuid": "c611ca1d-36d3-4d86-a302-16923489ccac", 00:12:58.723 "is_configured": false, 00:12:58.723 "data_offset": 0, 00:12:58.723 "data_size": 65536 00:12:58.723 }, 00:12:58.723 { 00:12:58.723 "name": "BaseBdev3", 00:12:58.723 "uuid": "6a4ae62c-89e4-496c-90ba-75e2a877a557", 00:12:58.723 "is_configured": true, 00:12:58.723 "data_offset": 0, 00:12:58.723 "data_size": 65536 00:12:58.723 } 00:12:58.723 ] 00:12:58.723 }' 00:12:58.723 05:02:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:58.723 05:02:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.983 05:02:09 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:58.983 05:02:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:58.983 05:02:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.983 05:02:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.983 05:02:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.983 05:02:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:12:58.983 05:02:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:12:58.983 05:02:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.983 05:02:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.983 [2024-12-14 05:02:09.816473] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:58.983 05:02:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.983 05:02:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:12:58.983 05:02:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:58.983 05:02:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:58.983 05:02:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:58.983 05:02:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:58.983 05:02:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:58.983 05:02:09 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:58.983 05:02:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:58.983 05:02:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:58.983 05:02:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:58.983 05:02:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:58.983 05:02:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:58.983 05:02:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.983 05:02:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.983 05:02:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.242 05:02:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:59.242 "name": "Existed_Raid", 00:12:59.242 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:59.242 "strip_size_kb": 64, 00:12:59.242 "state": "configuring", 00:12:59.242 "raid_level": "raid5f", 00:12:59.242 "superblock": false, 00:12:59.242 "num_base_bdevs": 3, 00:12:59.242 "num_base_bdevs_discovered": 1, 00:12:59.242 "num_base_bdevs_operational": 3, 00:12:59.242 "base_bdevs_list": [ 00:12:59.242 { 00:12:59.242 "name": "BaseBdev1", 00:12:59.242 "uuid": "640b4357-8fa4-4d66-8c7b-e20c6ada8613", 00:12:59.242 "is_configured": true, 00:12:59.242 "data_offset": 0, 00:12:59.242 "data_size": 65536 00:12:59.242 }, 00:12:59.242 { 00:12:59.242 "name": null, 00:12:59.242 "uuid": "c611ca1d-36d3-4d86-a302-16923489ccac", 00:12:59.242 "is_configured": false, 00:12:59.242 "data_offset": 0, 00:12:59.242 "data_size": 65536 00:12:59.242 }, 00:12:59.242 { 00:12:59.242 "name": null, 
00:12:59.242 "uuid": "6a4ae62c-89e4-496c-90ba-75e2a877a557", 00:12:59.242 "is_configured": false, 00:12:59.242 "data_offset": 0, 00:12:59.242 "data_size": 65536 00:12:59.242 } 00:12:59.242 ] 00:12:59.242 }' 00:12:59.242 05:02:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:59.242 05:02:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.502 05:02:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:59.502 05:02:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.502 05:02:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.502 05:02:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:59.502 05:02:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.502 05:02:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:12:59.502 05:02:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:12:59.502 05:02:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.502 05:02:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.502 [2024-12-14 05:02:10.335623] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:59.503 05:02:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.503 05:02:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:12:59.503 05:02:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:59.503 05:02:10 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:59.503 05:02:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:59.503 05:02:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:59.503 05:02:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:59.503 05:02:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:59.503 05:02:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:59.503 05:02:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:59.503 05:02:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:59.503 05:02:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:59.503 05:02:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:59.503 05:02:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.503 05:02:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.503 05:02:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.762 05:02:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:59.762 "name": "Existed_Raid", 00:12:59.762 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:59.762 "strip_size_kb": 64, 00:12:59.762 "state": "configuring", 00:12:59.762 "raid_level": "raid5f", 00:12:59.762 "superblock": false, 00:12:59.762 "num_base_bdevs": 3, 00:12:59.762 "num_base_bdevs_discovered": 2, 00:12:59.762 "num_base_bdevs_operational": 3, 00:12:59.762 "base_bdevs_list": [ 00:12:59.762 { 
00:12:59.763 "name": "BaseBdev1", 00:12:59.763 "uuid": "640b4357-8fa4-4d66-8c7b-e20c6ada8613", 00:12:59.763 "is_configured": true, 00:12:59.763 "data_offset": 0, 00:12:59.763 "data_size": 65536 00:12:59.763 }, 00:12:59.763 { 00:12:59.763 "name": null, 00:12:59.763 "uuid": "c611ca1d-36d3-4d86-a302-16923489ccac", 00:12:59.763 "is_configured": false, 00:12:59.763 "data_offset": 0, 00:12:59.763 "data_size": 65536 00:12:59.763 }, 00:12:59.763 { 00:12:59.763 "name": "BaseBdev3", 00:12:59.763 "uuid": "6a4ae62c-89e4-496c-90ba-75e2a877a557", 00:12:59.763 "is_configured": true, 00:12:59.763 "data_offset": 0, 00:12:59.763 "data_size": 65536 00:12:59.763 } 00:12:59.763 ] 00:12:59.763 }' 00:12:59.763 05:02:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:59.763 05:02:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.022 05:02:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:00.022 05:02:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:00.022 05:02:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.022 05:02:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.022 05:02:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.022 05:02:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:13:00.022 05:02:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:00.022 05:02:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.022 05:02:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.022 [2024-12-14 05:02:10.830812] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:00.022 05:02:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.022 05:02:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:00.022 05:02:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:00.023 05:02:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:00.023 05:02:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:00.023 05:02:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:00.023 05:02:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:00.023 05:02:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:00.023 05:02:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:00.023 05:02:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:00.023 05:02:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:00.023 05:02:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:00.023 05:02:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:00.023 05:02:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.023 05:02:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.023 05:02:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.023 05:02:10 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:00.023 "name": "Existed_Raid", 00:13:00.023 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:00.023 "strip_size_kb": 64, 00:13:00.023 "state": "configuring", 00:13:00.023 "raid_level": "raid5f", 00:13:00.023 "superblock": false, 00:13:00.023 "num_base_bdevs": 3, 00:13:00.023 "num_base_bdevs_discovered": 1, 00:13:00.023 "num_base_bdevs_operational": 3, 00:13:00.023 "base_bdevs_list": [ 00:13:00.023 { 00:13:00.023 "name": null, 00:13:00.023 "uuid": "640b4357-8fa4-4d66-8c7b-e20c6ada8613", 00:13:00.023 "is_configured": false, 00:13:00.023 "data_offset": 0, 00:13:00.023 "data_size": 65536 00:13:00.023 }, 00:13:00.023 { 00:13:00.023 "name": null, 00:13:00.023 "uuid": "c611ca1d-36d3-4d86-a302-16923489ccac", 00:13:00.023 "is_configured": false, 00:13:00.023 "data_offset": 0, 00:13:00.023 "data_size": 65536 00:13:00.023 }, 00:13:00.023 { 00:13:00.023 "name": "BaseBdev3", 00:13:00.023 "uuid": "6a4ae62c-89e4-496c-90ba-75e2a877a557", 00:13:00.023 "is_configured": true, 00:13:00.023 "data_offset": 0, 00:13:00.023 "data_size": 65536 00:13:00.023 } 00:13:00.023 ] 00:13:00.023 }' 00:13:00.023 05:02:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:00.023 05:02:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.591 05:02:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:00.591 05:02:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.591 05:02:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:00.591 05:02:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.592 05:02:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.592 05:02:11 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:13:00.592 05:02:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:13:00.592 05:02:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.592 05:02:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.592 [2024-12-14 05:02:11.340598] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:00.592 05:02:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.592 05:02:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:00.592 05:02:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:00.592 05:02:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:00.592 05:02:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:00.592 05:02:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:00.592 05:02:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:00.592 05:02:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:00.592 05:02:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:00.592 05:02:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:00.592 05:02:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:00.592 05:02:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:00.592 05:02:11 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:00.592 05:02:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.592 05:02:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.592 05:02:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.592 05:02:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:00.592 "name": "Existed_Raid", 00:13:00.592 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:00.592 "strip_size_kb": 64, 00:13:00.592 "state": "configuring", 00:13:00.592 "raid_level": "raid5f", 00:13:00.592 "superblock": false, 00:13:00.592 "num_base_bdevs": 3, 00:13:00.592 "num_base_bdevs_discovered": 2, 00:13:00.592 "num_base_bdevs_operational": 3, 00:13:00.592 "base_bdevs_list": [ 00:13:00.592 { 00:13:00.592 "name": null, 00:13:00.592 "uuid": "640b4357-8fa4-4d66-8c7b-e20c6ada8613", 00:13:00.592 "is_configured": false, 00:13:00.592 "data_offset": 0, 00:13:00.592 "data_size": 65536 00:13:00.592 }, 00:13:00.592 { 00:13:00.592 "name": "BaseBdev2", 00:13:00.592 "uuid": "c611ca1d-36d3-4d86-a302-16923489ccac", 00:13:00.592 "is_configured": true, 00:13:00.592 "data_offset": 0, 00:13:00.592 "data_size": 65536 00:13:00.592 }, 00:13:00.592 { 00:13:00.592 "name": "BaseBdev3", 00:13:00.592 "uuid": "6a4ae62c-89e4-496c-90ba-75e2a877a557", 00:13:00.592 "is_configured": true, 00:13:00.592 "data_offset": 0, 00:13:00.592 "data_size": 65536 00:13:00.592 } 00:13:00.592 ] 00:13:00.592 }' 00:13:00.592 05:02:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:00.592 05:02:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.161 05:02:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:01.161 05:02:11 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:01.161 05:02:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.161 05:02:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.161 05:02:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.161 05:02:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:13:01.161 05:02:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:13:01.161 05:02:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:01.161 05:02:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.161 05:02:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.161 05:02:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.161 05:02:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 640b4357-8fa4-4d66-8c7b-e20c6ada8613 00:13:01.161 05:02:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.161 05:02:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.161 [2024-12-14 05:02:11.934466] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:13:01.161 [2024-12-14 05:02:11.934562] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:13:01.161 [2024-12-14 05:02:11.934588] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:13:01.161 [2024-12-14 05:02:11.934878] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000006080 00:13:01.161 [2024-12-14 05:02:11.935360] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:13:01.161 [2024-12-14 05:02:11.935412] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:13:01.161 [2024-12-14 05:02:11.935624] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:01.161 NewBaseBdev 00:13:01.161 05:02:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.161 05:02:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:13:01.161 05:02:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:13:01.161 05:02:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:01.161 05:02:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:13:01.161 05:02:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:01.161 05:02:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:01.161 05:02:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:01.161 05:02:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.161 05:02:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.161 05:02:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.161 05:02:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:13:01.161 05:02:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.161 05:02:11 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.161 [ 00:13:01.161 { 00:13:01.161 "name": "NewBaseBdev", 00:13:01.161 "aliases": [ 00:13:01.161 "640b4357-8fa4-4d66-8c7b-e20c6ada8613" 00:13:01.161 ], 00:13:01.161 "product_name": "Malloc disk", 00:13:01.161 "block_size": 512, 00:13:01.161 "num_blocks": 65536, 00:13:01.161 "uuid": "640b4357-8fa4-4d66-8c7b-e20c6ada8613", 00:13:01.161 "assigned_rate_limits": { 00:13:01.161 "rw_ios_per_sec": 0, 00:13:01.161 "rw_mbytes_per_sec": 0, 00:13:01.161 "r_mbytes_per_sec": 0, 00:13:01.161 "w_mbytes_per_sec": 0 00:13:01.161 }, 00:13:01.161 "claimed": true, 00:13:01.161 "claim_type": "exclusive_write", 00:13:01.161 "zoned": false, 00:13:01.161 "supported_io_types": { 00:13:01.161 "read": true, 00:13:01.161 "write": true, 00:13:01.161 "unmap": true, 00:13:01.161 "flush": true, 00:13:01.161 "reset": true, 00:13:01.161 "nvme_admin": false, 00:13:01.161 "nvme_io": false, 00:13:01.161 "nvme_io_md": false, 00:13:01.161 "write_zeroes": true, 00:13:01.161 "zcopy": true, 00:13:01.161 "get_zone_info": false, 00:13:01.161 "zone_management": false, 00:13:01.161 "zone_append": false, 00:13:01.161 "compare": false, 00:13:01.161 "compare_and_write": false, 00:13:01.161 "abort": true, 00:13:01.161 "seek_hole": false, 00:13:01.161 "seek_data": false, 00:13:01.161 "copy": true, 00:13:01.161 "nvme_iov_md": false 00:13:01.161 }, 00:13:01.161 "memory_domains": [ 00:13:01.161 { 00:13:01.161 "dma_device_id": "system", 00:13:01.161 "dma_device_type": 1 00:13:01.161 }, 00:13:01.161 { 00:13:01.161 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:01.161 "dma_device_type": 2 00:13:01.161 } 00:13:01.161 ], 00:13:01.161 "driver_specific": {} 00:13:01.161 } 00:13:01.161 ] 00:13:01.161 05:02:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.161 05:02:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:13:01.161 05:02:11 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:13:01.161 05:02:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:01.161 05:02:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:01.161 05:02:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:01.161 05:02:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:01.161 05:02:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:01.161 05:02:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:01.161 05:02:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:01.161 05:02:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:01.161 05:02:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:01.161 05:02:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:01.162 05:02:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.162 05:02:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.162 05:02:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:01.162 05:02:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.162 05:02:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:01.162 "name": "Existed_Raid", 00:13:01.162 "uuid": "4f2642f1-d0f4-4b8b-a684-c7553c64dbad", 00:13:01.162 "strip_size_kb": 64, 00:13:01.162 "state": "online", 
00:13:01.162 "raid_level": "raid5f", 00:13:01.162 "superblock": false, 00:13:01.162 "num_base_bdevs": 3, 00:13:01.162 "num_base_bdevs_discovered": 3, 00:13:01.162 "num_base_bdevs_operational": 3, 00:13:01.162 "base_bdevs_list": [ 00:13:01.162 { 00:13:01.162 "name": "NewBaseBdev", 00:13:01.162 "uuid": "640b4357-8fa4-4d66-8c7b-e20c6ada8613", 00:13:01.162 "is_configured": true, 00:13:01.162 "data_offset": 0, 00:13:01.162 "data_size": 65536 00:13:01.162 }, 00:13:01.162 { 00:13:01.162 "name": "BaseBdev2", 00:13:01.162 "uuid": "c611ca1d-36d3-4d86-a302-16923489ccac", 00:13:01.162 "is_configured": true, 00:13:01.162 "data_offset": 0, 00:13:01.162 "data_size": 65536 00:13:01.162 }, 00:13:01.162 { 00:13:01.162 "name": "BaseBdev3", 00:13:01.162 "uuid": "6a4ae62c-89e4-496c-90ba-75e2a877a557", 00:13:01.162 "is_configured": true, 00:13:01.162 "data_offset": 0, 00:13:01.162 "data_size": 65536 00:13:01.162 } 00:13:01.162 ] 00:13:01.162 }' 00:13:01.162 05:02:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:01.162 05:02:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.730 05:02:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:13:01.730 05:02:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:01.730 05:02:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:01.730 05:02:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:01.730 05:02:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:01.730 05:02:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:01.730 05:02:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:01.730 05:02:12 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:01.730 05:02:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.730 05:02:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.730 [2024-12-14 05:02:12.461758] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:01.730 05:02:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.730 05:02:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:01.730 "name": "Existed_Raid", 00:13:01.730 "aliases": [ 00:13:01.730 "4f2642f1-d0f4-4b8b-a684-c7553c64dbad" 00:13:01.730 ], 00:13:01.730 "product_name": "Raid Volume", 00:13:01.730 "block_size": 512, 00:13:01.730 "num_blocks": 131072, 00:13:01.730 "uuid": "4f2642f1-d0f4-4b8b-a684-c7553c64dbad", 00:13:01.730 "assigned_rate_limits": { 00:13:01.730 "rw_ios_per_sec": 0, 00:13:01.730 "rw_mbytes_per_sec": 0, 00:13:01.730 "r_mbytes_per_sec": 0, 00:13:01.730 "w_mbytes_per_sec": 0 00:13:01.730 }, 00:13:01.730 "claimed": false, 00:13:01.730 "zoned": false, 00:13:01.730 "supported_io_types": { 00:13:01.730 "read": true, 00:13:01.730 "write": true, 00:13:01.730 "unmap": false, 00:13:01.730 "flush": false, 00:13:01.730 "reset": true, 00:13:01.730 "nvme_admin": false, 00:13:01.730 "nvme_io": false, 00:13:01.730 "nvme_io_md": false, 00:13:01.730 "write_zeroes": true, 00:13:01.730 "zcopy": false, 00:13:01.730 "get_zone_info": false, 00:13:01.730 "zone_management": false, 00:13:01.730 "zone_append": false, 00:13:01.730 "compare": false, 00:13:01.730 "compare_and_write": false, 00:13:01.730 "abort": false, 00:13:01.730 "seek_hole": false, 00:13:01.730 "seek_data": false, 00:13:01.730 "copy": false, 00:13:01.730 "nvme_iov_md": false 00:13:01.730 }, 00:13:01.730 "driver_specific": { 00:13:01.730 "raid": { 00:13:01.730 "uuid": 
"4f2642f1-d0f4-4b8b-a684-c7553c64dbad", 00:13:01.730 "strip_size_kb": 64, 00:13:01.730 "state": "online", 00:13:01.730 "raid_level": "raid5f", 00:13:01.730 "superblock": false, 00:13:01.730 "num_base_bdevs": 3, 00:13:01.730 "num_base_bdevs_discovered": 3, 00:13:01.730 "num_base_bdevs_operational": 3, 00:13:01.730 "base_bdevs_list": [ 00:13:01.730 { 00:13:01.730 "name": "NewBaseBdev", 00:13:01.730 "uuid": "640b4357-8fa4-4d66-8c7b-e20c6ada8613", 00:13:01.730 "is_configured": true, 00:13:01.730 "data_offset": 0, 00:13:01.730 "data_size": 65536 00:13:01.730 }, 00:13:01.730 { 00:13:01.730 "name": "BaseBdev2", 00:13:01.730 "uuid": "c611ca1d-36d3-4d86-a302-16923489ccac", 00:13:01.730 "is_configured": true, 00:13:01.730 "data_offset": 0, 00:13:01.730 "data_size": 65536 00:13:01.730 }, 00:13:01.730 { 00:13:01.730 "name": "BaseBdev3", 00:13:01.730 "uuid": "6a4ae62c-89e4-496c-90ba-75e2a877a557", 00:13:01.730 "is_configured": true, 00:13:01.730 "data_offset": 0, 00:13:01.730 "data_size": 65536 00:13:01.730 } 00:13:01.730 ] 00:13:01.730 } 00:13:01.730 } 00:13:01.730 }' 00:13:01.730 05:02:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:01.730 05:02:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:13:01.730 BaseBdev2 00:13:01.730 BaseBdev3' 00:13:01.730 05:02:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:01.730 05:02:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:01.730 05:02:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:01.730 05:02:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:13:01.730 05:02:12 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.730 05:02:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.730 05:02:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:01.730 05:02:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.991 05:02:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:01.991 05:02:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:01.991 05:02:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:01.991 05:02:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:01.991 05:02:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:01.991 05:02:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.991 05:02:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.991 05:02:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.991 05:02:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:01.991 05:02:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:01.991 05:02:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:01.991 05:02:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:01.991 05:02:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, 
.dif_type] | join(" ")' 00:13:01.991 05:02:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.991 05:02:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.991 05:02:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.991 05:02:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:01.991 05:02:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:01.991 05:02:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:01.991 05:02:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.991 05:02:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.991 [2024-12-14 05:02:12.737133] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:01.991 [2024-12-14 05:02:12.737202] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:01.991 [2024-12-14 05:02:12.737297] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:01.991 [2024-12-14 05:02:12.737551] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:01.991 [2024-12-14 05:02:12.737607] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline 00:13:01.991 05:02:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.991 05:02:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 90455 00:13:01.991 05:02:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 90455 ']' 00:13:01.991 05:02:12 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@954 -- # kill -0 90455 00:13:01.991 05:02:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@955 -- # uname 00:13:01.991 05:02:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:01.991 05:02:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 90455 00:13:01.991 killing process with pid 90455 00:13:01.991 05:02:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:01.991 05:02:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:01.991 05:02:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 90455' 00:13:01.991 05:02:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@969 -- # kill 90455 00:13:01.991 [2024-12-14 05:02:12.784189] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:01.991 05:02:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@974 -- # wait 90455 00:13:01.991 [2024-12-14 05:02:12.815457] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:02.251 ************************************ 00:13:02.251 END TEST raid5f_state_function_test 00:13:02.251 ************************************ 00:13:02.251 05:02:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:13:02.251 00:13:02.251 real 0m9.098s 00:13:02.251 user 0m15.446s 00:13:02.251 sys 0m1.990s 00:13:02.251 05:02:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:02.251 05:02:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.251 05:02:13 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 3 true 00:13:02.251 05:02:13 bdev_raid -- 
common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:13:02.251 05:02:13 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:02.251 05:02:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:02.512 ************************************ 00:13:02.512 START TEST raid5f_state_function_test_sb 00:13:02.512 ************************************ 00:13:02.512 05:02:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid5f 3 true 00:13:02.512 05:02:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:13:02.512 05:02:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:13:02.512 05:02:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:13:02.512 05:02:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:13:02.512 05:02:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:13:02.512 05:02:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:02.512 05:02:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:13:02.512 05:02:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:02.512 05:02:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:02.512 05:02:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:13:02.512 05:02:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:02.512 05:02:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:02.512 05:02:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:13:02.512 05:02:13 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:02.512 05:02:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:02.512 05:02:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:13:02.512 05:02:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:13:02.512 05:02:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:13:02.512 05:02:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:13:02.512 05:02:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:13:02.512 05:02:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:13:02.512 05:02:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:13:02.512 05:02:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:13:02.512 05:02:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:13:02.512 05:02:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:13:02.512 05:02:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:13:02.512 05:02:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=91065 00:13:02.512 05:02:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:13:02.512 05:02:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 91065' 00:13:02.512 Process raid pid: 91065 00:13:02.512 05:02:13 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@231 -- # waitforlisten 91065 00:13:02.512 05:02:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 91065 ']' 00:13:02.512 05:02:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:02.512 05:02:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:02.512 05:02:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:02.512 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:02.512 05:02:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:02.512 05:02:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:02.512 [2024-12-14 05:02:13.246901] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:13:02.512 [2024-12-14 05:02:13.247133] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:02.772 [2024-12-14 05:02:13.408585] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:02.772 [2024-12-14 05:02:13.455818] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:02.772 [2024-12-14 05:02:13.498871] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:02.772 [2024-12-14 05:02:13.498906] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:03.342 05:02:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:03.342 05:02:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:13:03.342 05:02:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:03.342 05:02:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.342 05:02:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.342 [2024-12-14 05:02:14.064343] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:03.342 [2024-12-14 05:02:14.064448] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:03.342 [2024-12-14 05:02:14.064465] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:03.342 [2024-12-14 05:02:14.064476] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:03.342 [2024-12-14 05:02:14.064482] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:13:03.342 [2024-12-14 05:02:14.064492] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:03.342 05:02:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.342 05:02:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:03.342 05:02:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:03.342 05:02:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:03.342 05:02:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:03.342 05:02:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:03.342 05:02:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:03.342 05:02:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:03.342 05:02:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:03.342 05:02:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:03.342 05:02:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:03.342 05:02:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:03.342 05:02:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:03.342 05:02:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.342 05:02:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.342 05:02:14 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.342 05:02:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:03.342 "name": "Existed_Raid", 00:13:03.342 "uuid": "32ee6400-b079-4c7f-9cee-e219d9249388", 00:13:03.342 "strip_size_kb": 64, 00:13:03.342 "state": "configuring", 00:13:03.342 "raid_level": "raid5f", 00:13:03.342 "superblock": true, 00:13:03.342 "num_base_bdevs": 3, 00:13:03.342 "num_base_bdevs_discovered": 0, 00:13:03.342 "num_base_bdevs_operational": 3, 00:13:03.342 "base_bdevs_list": [ 00:13:03.342 { 00:13:03.342 "name": "BaseBdev1", 00:13:03.342 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:03.342 "is_configured": false, 00:13:03.342 "data_offset": 0, 00:13:03.342 "data_size": 0 00:13:03.342 }, 00:13:03.342 { 00:13:03.342 "name": "BaseBdev2", 00:13:03.342 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:03.342 "is_configured": false, 00:13:03.342 "data_offset": 0, 00:13:03.342 "data_size": 0 00:13:03.342 }, 00:13:03.342 { 00:13:03.342 "name": "BaseBdev3", 00:13:03.342 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:03.342 "is_configured": false, 00:13:03.342 "data_offset": 0, 00:13:03.342 "data_size": 0 00:13:03.342 } 00:13:03.342 ] 00:13:03.342 }' 00:13:03.342 05:02:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:03.342 05:02:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.911 05:02:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:03.911 05:02:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.911 05:02:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.911 [2024-12-14 05:02:14.507458] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:03.911 
[2024-12-14 05:02:14.507545] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:13:03.911 05:02:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.911 05:02:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:03.911 05:02:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.911 05:02:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.911 [2024-12-14 05:02:14.519479] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:03.911 [2024-12-14 05:02:14.519553] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:03.911 [2024-12-14 05:02:14.519566] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:03.911 [2024-12-14 05:02:14.519575] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:03.911 [2024-12-14 05:02:14.519580] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:03.911 [2024-12-14 05:02:14.519589] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:03.911 05:02:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.911 05:02:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:03.911 05:02:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.911 05:02:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.911 [2024-12-14 05:02:14.540403] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:03.911 BaseBdev1 00:13:03.911 05:02:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.911 05:02:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:13:03.911 05:02:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:13:03.911 05:02:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:03.911 05:02:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:13:03.911 05:02:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:03.911 05:02:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:03.911 05:02:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:03.911 05:02:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.911 05:02:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.911 05:02:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.911 05:02:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:03.911 05:02:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.911 05:02:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.911 [ 00:13:03.911 { 00:13:03.911 "name": "BaseBdev1", 00:13:03.911 "aliases": [ 00:13:03.911 "7e25e947-7602-49fd-b715-54e133af867e" 00:13:03.911 ], 00:13:03.911 "product_name": "Malloc disk", 00:13:03.911 "block_size": 512, 00:13:03.911 
"num_blocks": 65536, 00:13:03.911 "uuid": "7e25e947-7602-49fd-b715-54e133af867e", 00:13:03.911 "assigned_rate_limits": { 00:13:03.911 "rw_ios_per_sec": 0, 00:13:03.911 "rw_mbytes_per_sec": 0, 00:13:03.911 "r_mbytes_per_sec": 0, 00:13:03.911 "w_mbytes_per_sec": 0 00:13:03.911 }, 00:13:03.911 "claimed": true, 00:13:03.911 "claim_type": "exclusive_write", 00:13:03.911 "zoned": false, 00:13:03.911 "supported_io_types": { 00:13:03.911 "read": true, 00:13:03.911 "write": true, 00:13:03.911 "unmap": true, 00:13:03.911 "flush": true, 00:13:03.911 "reset": true, 00:13:03.911 "nvme_admin": false, 00:13:03.911 "nvme_io": false, 00:13:03.911 "nvme_io_md": false, 00:13:03.911 "write_zeroes": true, 00:13:03.911 "zcopy": true, 00:13:03.911 "get_zone_info": false, 00:13:03.911 "zone_management": false, 00:13:03.911 "zone_append": false, 00:13:03.911 "compare": false, 00:13:03.911 "compare_and_write": false, 00:13:03.911 "abort": true, 00:13:03.911 "seek_hole": false, 00:13:03.911 "seek_data": false, 00:13:03.911 "copy": true, 00:13:03.911 "nvme_iov_md": false 00:13:03.911 }, 00:13:03.911 "memory_domains": [ 00:13:03.911 { 00:13:03.911 "dma_device_id": "system", 00:13:03.911 "dma_device_type": 1 00:13:03.911 }, 00:13:03.911 { 00:13:03.911 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:03.911 "dma_device_type": 2 00:13:03.911 } 00:13:03.911 ], 00:13:03.911 "driver_specific": {} 00:13:03.911 } 00:13:03.911 ] 00:13:03.911 05:02:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.911 05:02:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:13:03.911 05:02:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:03.911 05:02:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:03.911 05:02:14 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:03.911 05:02:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:03.911 05:02:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:03.911 05:02:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:03.911 05:02:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:03.911 05:02:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:03.911 05:02:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:03.911 05:02:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:03.911 05:02:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:03.911 05:02:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:03.912 05:02:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.912 05:02:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.912 05:02:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.912 05:02:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:03.912 "name": "Existed_Raid", 00:13:03.912 "uuid": "9515235e-49c0-4ed5-a58f-823df8d31a12", 00:13:03.912 "strip_size_kb": 64, 00:13:03.912 "state": "configuring", 00:13:03.912 "raid_level": "raid5f", 00:13:03.912 "superblock": true, 00:13:03.912 "num_base_bdevs": 3, 00:13:03.912 "num_base_bdevs_discovered": 1, 00:13:03.912 "num_base_bdevs_operational": 3, 00:13:03.912 "base_bdevs_list": [ 00:13:03.912 { 00:13:03.912 
"name": "BaseBdev1", 00:13:03.912 "uuid": "7e25e947-7602-49fd-b715-54e133af867e", 00:13:03.912 "is_configured": true, 00:13:03.912 "data_offset": 2048, 00:13:03.912 "data_size": 63488 00:13:03.912 }, 00:13:03.912 { 00:13:03.912 "name": "BaseBdev2", 00:13:03.912 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:03.912 "is_configured": false, 00:13:03.912 "data_offset": 0, 00:13:03.912 "data_size": 0 00:13:03.912 }, 00:13:03.912 { 00:13:03.912 "name": "BaseBdev3", 00:13:03.912 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:03.912 "is_configured": false, 00:13:03.912 "data_offset": 0, 00:13:03.912 "data_size": 0 00:13:03.912 } 00:13:03.912 ] 00:13:03.912 }' 00:13:03.912 05:02:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:03.912 05:02:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.171 05:02:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:04.171 05:02:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.171 05:02:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.171 [2024-12-14 05:02:15.043570] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:04.171 [2024-12-14 05:02:15.043670] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:13:04.171 05:02:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.171 05:02:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:04.171 05:02:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.171 05:02:15 bdev_raid.raid5f_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:13:04.430 [2024-12-14 05:02:15.055636] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:04.430 [2024-12-14 05:02:15.057500] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:04.430 [2024-12-14 05:02:15.057574] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:04.430 [2024-12-14 05:02:15.057603] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:04.430 [2024-12-14 05:02:15.057625] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:04.430 05:02:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.430 05:02:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:13:04.430 05:02:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:04.430 05:02:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:04.430 05:02:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:04.430 05:02:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:04.430 05:02:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:04.430 05:02:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:04.430 05:02:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:04.430 05:02:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:04.430 05:02:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:13:04.430 05:02:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:04.430 05:02:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:04.430 05:02:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:04.430 05:02:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:04.430 05:02:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.430 05:02:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.430 05:02:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.430 05:02:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:04.430 "name": "Existed_Raid", 00:13:04.430 "uuid": "1c774efc-21f5-4aeb-a699-e38f951abf25", 00:13:04.430 "strip_size_kb": 64, 00:13:04.430 "state": "configuring", 00:13:04.430 "raid_level": "raid5f", 00:13:04.430 "superblock": true, 00:13:04.430 "num_base_bdevs": 3, 00:13:04.430 "num_base_bdevs_discovered": 1, 00:13:04.430 "num_base_bdevs_operational": 3, 00:13:04.430 "base_bdevs_list": [ 00:13:04.430 { 00:13:04.430 "name": "BaseBdev1", 00:13:04.430 "uuid": "7e25e947-7602-49fd-b715-54e133af867e", 00:13:04.430 "is_configured": true, 00:13:04.430 "data_offset": 2048, 00:13:04.430 "data_size": 63488 00:13:04.430 }, 00:13:04.430 { 00:13:04.430 "name": "BaseBdev2", 00:13:04.430 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:04.430 "is_configured": false, 00:13:04.430 "data_offset": 0, 00:13:04.430 "data_size": 0 00:13:04.430 }, 00:13:04.430 { 00:13:04.430 "name": "BaseBdev3", 00:13:04.430 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:04.430 "is_configured": false, 00:13:04.430 "data_offset": 0, 00:13:04.430 "data_size": 
0 00:13:04.430 } 00:13:04.430 ] 00:13:04.430 }' 00:13:04.430 05:02:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:04.430 05:02:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.689 05:02:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:04.689 05:02:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.689 05:02:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.689 [2024-12-14 05:02:15.500017] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:04.689 BaseBdev2 00:13:04.689 05:02:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.689 05:02:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:13:04.689 05:02:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:13:04.689 05:02:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:04.689 05:02:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:13:04.689 05:02:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:04.689 05:02:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:04.689 05:02:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:04.689 05:02:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.689 05:02:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.689 05:02:15 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.689 05:02:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:04.689 05:02:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.689 05:02:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.689 [ 00:13:04.689 { 00:13:04.689 "name": "BaseBdev2", 00:13:04.689 "aliases": [ 00:13:04.689 "c5f6233c-8a17-4ebd-b7cf-0c2a6a8b32e6" 00:13:04.689 ], 00:13:04.689 "product_name": "Malloc disk", 00:13:04.689 "block_size": 512, 00:13:04.689 "num_blocks": 65536, 00:13:04.689 "uuid": "c5f6233c-8a17-4ebd-b7cf-0c2a6a8b32e6", 00:13:04.689 "assigned_rate_limits": { 00:13:04.689 "rw_ios_per_sec": 0, 00:13:04.689 "rw_mbytes_per_sec": 0, 00:13:04.689 "r_mbytes_per_sec": 0, 00:13:04.689 "w_mbytes_per_sec": 0 00:13:04.689 }, 00:13:04.689 "claimed": true, 00:13:04.689 "claim_type": "exclusive_write", 00:13:04.689 "zoned": false, 00:13:04.689 "supported_io_types": { 00:13:04.689 "read": true, 00:13:04.689 "write": true, 00:13:04.689 "unmap": true, 00:13:04.689 "flush": true, 00:13:04.689 "reset": true, 00:13:04.689 "nvme_admin": false, 00:13:04.689 "nvme_io": false, 00:13:04.689 "nvme_io_md": false, 00:13:04.689 "write_zeroes": true, 00:13:04.689 "zcopy": true, 00:13:04.689 "get_zone_info": false, 00:13:04.689 "zone_management": false, 00:13:04.689 "zone_append": false, 00:13:04.689 "compare": false, 00:13:04.689 "compare_and_write": false, 00:13:04.689 "abort": true, 00:13:04.689 "seek_hole": false, 00:13:04.689 "seek_data": false, 00:13:04.689 "copy": true, 00:13:04.689 "nvme_iov_md": false 00:13:04.689 }, 00:13:04.689 "memory_domains": [ 00:13:04.689 { 00:13:04.690 "dma_device_id": "system", 00:13:04.690 "dma_device_type": 1 00:13:04.690 }, 00:13:04.690 { 00:13:04.690 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:04.690 "dma_device_type": 2 00:13:04.690 } 
00:13:04.690 ], 00:13:04.690 "driver_specific": {} 00:13:04.690 } 00:13:04.690 ] 00:13:04.690 05:02:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.690 05:02:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:13:04.690 05:02:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:04.690 05:02:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:04.690 05:02:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:04.690 05:02:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:04.690 05:02:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:04.690 05:02:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:04.690 05:02:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:04.690 05:02:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:04.690 05:02:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:04.690 05:02:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:04.690 05:02:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:04.690 05:02:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:04.690 05:02:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:04.690 05:02:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:13:04.690 05:02:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.690 05:02:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.690 05:02:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.949 05:02:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:04.949 "name": "Existed_Raid", 00:13:04.949 "uuid": "1c774efc-21f5-4aeb-a699-e38f951abf25", 00:13:04.949 "strip_size_kb": 64, 00:13:04.949 "state": "configuring", 00:13:04.949 "raid_level": "raid5f", 00:13:04.949 "superblock": true, 00:13:04.949 "num_base_bdevs": 3, 00:13:04.949 "num_base_bdevs_discovered": 2, 00:13:04.949 "num_base_bdevs_operational": 3, 00:13:04.949 "base_bdevs_list": [ 00:13:04.949 { 00:13:04.949 "name": "BaseBdev1", 00:13:04.949 "uuid": "7e25e947-7602-49fd-b715-54e133af867e", 00:13:04.949 "is_configured": true, 00:13:04.949 "data_offset": 2048, 00:13:04.949 "data_size": 63488 00:13:04.949 }, 00:13:04.949 { 00:13:04.949 "name": "BaseBdev2", 00:13:04.949 "uuid": "c5f6233c-8a17-4ebd-b7cf-0c2a6a8b32e6", 00:13:04.949 "is_configured": true, 00:13:04.949 "data_offset": 2048, 00:13:04.949 "data_size": 63488 00:13:04.949 }, 00:13:04.949 { 00:13:04.949 "name": "BaseBdev3", 00:13:04.949 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:04.949 "is_configured": false, 00:13:04.949 "data_offset": 0, 00:13:04.949 "data_size": 0 00:13:04.949 } 00:13:04.949 ] 00:13:04.949 }' 00:13:04.949 05:02:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:04.949 05:02:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:05.209 05:02:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:05.210 05:02:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- 
# xtrace_disable 00:13:05.210 05:02:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:05.210 [2024-12-14 05:02:16.038079] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:05.210 [2024-12-14 05:02:16.038377] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:13:05.210 [2024-12-14 05:02:16.038447] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:13:05.210 BaseBdev3 00:13:05.210 [2024-12-14 05:02:16.038775] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:13:05.210 [2024-12-14 05:02:16.039201] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:13:05.210 [2024-12-14 05:02:16.039266] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:13:05.210 [2024-12-14 05:02:16.039448] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:05.210 05:02:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.210 05:02:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:13:05.210 05:02:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:13:05.210 05:02:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:05.210 05:02:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:13:05.210 05:02:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:05.210 05:02:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:05.210 05:02:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 
00:13:05.210 05:02:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.210 05:02:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:05.210 05:02:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.210 05:02:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:05.210 05:02:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.210 05:02:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:05.210 [ 00:13:05.210 { 00:13:05.210 "name": "BaseBdev3", 00:13:05.210 "aliases": [ 00:13:05.210 "406a7873-427c-4282-90a1-2cf957443a8f" 00:13:05.210 ], 00:13:05.210 "product_name": "Malloc disk", 00:13:05.210 "block_size": 512, 00:13:05.210 "num_blocks": 65536, 00:13:05.210 "uuid": "406a7873-427c-4282-90a1-2cf957443a8f", 00:13:05.210 "assigned_rate_limits": { 00:13:05.210 "rw_ios_per_sec": 0, 00:13:05.210 "rw_mbytes_per_sec": 0, 00:13:05.210 "r_mbytes_per_sec": 0, 00:13:05.210 "w_mbytes_per_sec": 0 00:13:05.210 }, 00:13:05.210 "claimed": true, 00:13:05.210 "claim_type": "exclusive_write", 00:13:05.210 "zoned": false, 00:13:05.210 "supported_io_types": { 00:13:05.210 "read": true, 00:13:05.210 "write": true, 00:13:05.210 "unmap": true, 00:13:05.210 "flush": true, 00:13:05.210 "reset": true, 00:13:05.210 "nvme_admin": false, 00:13:05.210 "nvme_io": false, 00:13:05.210 "nvme_io_md": false, 00:13:05.210 "write_zeroes": true, 00:13:05.210 "zcopy": true, 00:13:05.210 "get_zone_info": false, 00:13:05.210 "zone_management": false, 00:13:05.210 "zone_append": false, 00:13:05.210 "compare": false, 00:13:05.210 "compare_and_write": false, 00:13:05.210 "abort": true, 00:13:05.210 "seek_hole": false, 00:13:05.210 "seek_data": false, 00:13:05.210 "copy": true, 00:13:05.210 "nvme_iov_md": 
false 00:13:05.210 }, 00:13:05.210 "memory_domains": [ 00:13:05.210 { 00:13:05.210 "dma_device_id": "system", 00:13:05.210 "dma_device_type": 1 00:13:05.210 }, 00:13:05.210 { 00:13:05.210 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:05.210 "dma_device_type": 2 00:13:05.210 } 00:13:05.210 ], 00:13:05.210 "driver_specific": {} 00:13:05.210 } 00:13:05.210 ] 00:13:05.210 05:02:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.210 05:02:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:13:05.210 05:02:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:05.210 05:02:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:05.210 05:02:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:13:05.210 05:02:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:05.210 05:02:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:05.210 05:02:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:05.210 05:02:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:05.210 05:02:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:05.210 05:02:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:05.210 05:02:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:05.210 05:02:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:05.210 05:02:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local 
tmp 00:13:05.210 05:02:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:05.210 05:02:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:05.210 05:02:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.210 05:02:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:05.470 05:02:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.470 05:02:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:05.470 "name": "Existed_Raid", 00:13:05.470 "uuid": "1c774efc-21f5-4aeb-a699-e38f951abf25", 00:13:05.470 "strip_size_kb": 64, 00:13:05.470 "state": "online", 00:13:05.470 "raid_level": "raid5f", 00:13:05.470 "superblock": true, 00:13:05.470 "num_base_bdevs": 3, 00:13:05.470 "num_base_bdevs_discovered": 3, 00:13:05.470 "num_base_bdevs_operational": 3, 00:13:05.470 "base_bdevs_list": [ 00:13:05.470 { 00:13:05.470 "name": "BaseBdev1", 00:13:05.470 "uuid": "7e25e947-7602-49fd-b715-54e133af867e", 00:13:05.470 "is_configured": true, 00:13:05.470 "data_offset": 2048, 00:13:05.470 "data_size": 63488 00:13:05.470 }, 00:13:05.470 { 00:13:05.470 "name": "BaseBdev2", 00:13:05.470 "uuid": "c5f6233c-8a17-4ebd-b7cf-0c2a6a8b32e6", 00:13:05.470 "is_configured": true, 00:13:05.470 "data_offset": 2048, 00:13:05.470 "data_size": 63488 00:13:05.470 }, 00:13:05.470 { 00:13:05.470 "name": "BaseBdev3", 00:13:05.470 "uuid": "406a7873-427c-4282-90a1-2cf957443a8f", 00:13:05.470 "is_configured": true, 00:13:05.470 "data_offset": 2048, 00:13:05.470 "data_size": 63488 00:13:05.470 } 00:13:05.470 ] 00:13:05.470 }' 00:13:05.470 05:02:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:05.470 05:02:16 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:13:05.730 05:02:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:13:05.730 05:02:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:05.730 05:02:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:05.730 05:02:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:05.730 05:02:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:13:05.730 05:02:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:05.730 05:02:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:05.730 05:02:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.730 05:02:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:05.730 05:02:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:05.730 [2024-12-14 05:02:16.505455] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:05.730 05:02:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.730 05:02:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:05.730 "name": "Existed_Raid", 00:13:05.730 "aliases": [ 00:13:05.730 "1c774efc-21f5-4aeb-a699-e38f951abf25" 00:13:05.730 ], 00:13:05.730 "product_name": "Raid Volume", 00:13:05.730 "block_size": 512, 00:13:05.730 "num_blocks": 126976, 00:13:05.730 "uuid": "1c774efc-21f5-4aeb-a699-e38f951abf25", 00:13:05.730 "assigned_rate_limits": { 00:13:05.730 "rw_ios_per_sec": 0, 00:13:05.730 "rw_mbytes_per_sec": 0, 00:13:05.730 "r_mbytes_per_sec": 
0, 00:13:05.730 "w_mbytes_per_sec": 0 00:13:05.730 }, 00:13:05.730 "claimed": false, 00:13:05.730 "zoned": false, 00:13:05.730 "supported_io_types": { 00:13:05.730 "read": true, 00:13:05.730 "write": true, 00:13:05.730 "unmap": false, 00:13:05.730 "flush": false, 00:13:05.730 "reset": true, 00:13:05.730 "nvme_admin": false, 00:13:05.730 "nvme_io": false, 00:13:05.730 "nvme_io_md": false, 00:13:05.730 "write_zeroes": true, 00:13:05.730 "zcopy": false, 00:13:05.730 "get_zone_info": false, 00:13:05.730 "zone_management": false, 00:13:05.730 "zone_append": false, 00:13:05.730 "compare": false, 00:13:05.730 "compare_and_write": false, 00:13:05.730 "abort": false, 00:13:05.730 "seek_hole": false, 00:13:05.730 "seek_data": false, 00:13:05.730 "copy": false, 00:13:05.730 "nvme_iov_md": false 00:13:05.730 }, 00:13:05.730 "driver_specific": { 00:13:05.730 "raid": { 00:13:05.730 "uuid": "1c774efc-21f5-4aeb-a699-e38f951abf25", 00:13:05.730 "strip_size_kb": 64, 00:13:05.730 "state": "online", 00:13:05.730 "raid_level": "raid5f", 00:13:05.730 "superblock": true, 00:13:05.730 "num_base_bdevs": 3, 00:13:05.730 "num_base_bdevs_discovered": 3, 00:13:05.730 "num_base_bdevs_operational": 3, 00:13:05.730 "base_bdevs_list": [ 00:13:05.730 { 00:13:05.730 "name": "BaseBdev1", 00:13:05.730 "uuid": "7e25e947-7602-49fd-b715-54e133af867e", 00:13:05.730 "is_configured": true, 00:13:05.730 "data_offset": 2048, 00:13:05.730 "data_size": 63488 00:13:05.730 }, 00:13:05.730 { 00:13:05.730 "name": "BaseBdev2", 00:13:05.730 "uuid": "c5f6233c-8a17-4ebd-b7cf-0c2a6a8b32e6", 00:13:05.730 "is_configured": true, 00:13:05.730 "data_offset": 2048, 00:13:05.730 "data_size": 63488 00:13:05.730 }, 00:13:05.730 { 00:13:05.730 "name": "BaseBdev3", 00:13:05.730 "uuid": "406a7873-427c-4282-90a1-2cf957443a8f", 00:13:05.730 "is_configured": true, 00:13:05.730 "data_offset": 2048, 00:13:05.730 "data_size": 63488 00:13:05.730 } 00:13:05.730 ] 00:13:05.730 } 00:13:05.730 } 00:13:05.730 }' 00:13:05.730 05:02:16 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:05.730 05:02:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:13:05.730 BaseBdev2 00:13:05.730 BaseBdev3' 00:13:05.730 05:02:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:05.990 05:02:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:05.990 05:02:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:05.990 05:02:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:13:05.990 05:02:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.990 05:02:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:05.990 05:02:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:05.990 05:02:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.990 05:02:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:05.990 05:02:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:05.990 05:02:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:05.990 05:02:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:05.990 05:02:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:13:05.990 05:02:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.990 05:02:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:05.990 05:02:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.990 05:02:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:05.990 05:02:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:05.990 05:02:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:05.990 05:02:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:05.990 05:02:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:05.990 05:02:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.990 05:02:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:05.990 05:02:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.990 05:02:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:05.990 05:02:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:05.990 05:02:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:05.990 05:02:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.990 05:02:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:05.990 [2024-12-14 05:02:16.764892] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: 
BaseBdev1 00:13:05.990 05:02:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.990 05:02:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:13:05.990 05:02:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:13:05.990 05:02:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:05.990 05:02:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:13:05.990 05:02:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:13:05.990 05:02:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:13:05.990 05:02:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:05.990 05:02:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:05.990 05:02:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:05.990 05:02:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:05.990 05:02:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:05.990 05:02:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:05.990 05:02:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:05.990 05:02:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:05.990 05:02:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:05.990 05:02:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:13:05.990 05:02:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:05.990 05:02:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.990 05:02:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:05.990 05:02:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.990 05:02:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:05.990 "name": "Existed_Raid", 00:13:05.990 "uuid": "1c774efc-21f5-4aeb-a699-e38f951abf25", 00:13:05.990 "strip_size_kb": 64, 00:13:05.990 "state": "online", 00:13:05.990 "raid_level": "raid5f", 00:13:05.990 "superblock": true, 00:13:05.990 "num_base_bdevs": 3, 00:13:05.990 "num_base_bdevs_discovered": 2, 00:13:05.990 "num_base_bdevs_operational": 2, 00:13:05.990 "base_bdevs_list": [ 00:13:05.990 { 00:13:05.990 "name": null, 00:13:05.990 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:05.990 "is_configured": false, 00:13:05.990 "data_offset": 0, 00:13:05.990 "data_size": 63488 00:13:05.990 }, 00:13:05.990 { 00:13:05.990 "name": "BaseBdev2", 00:13:05.990 "uuid": "c5f6233c-8a17-4ebd-b7cf-0c2a6a8b32e6", 00:13:05.990 "is_configured": true, 00:13:05.990 "data_offset": 2048, 00:13:05.990 "data_size": 63488 00:13:05.990 }, 00:13:05.990 { 00:13:05.990 "name": "BaseBdev3", 00:13:05.990 "uuid": "406a7873-427c-4282-90a1-2cf957443a8f", 00:13:05.990 "is_configured": true, 00:13:05.990 "data_offset": 2048, 00:13:05.990 "data_size": 63488 00:13:05.990 } 00:13:05.990 ] 00:13:05.990 }' 00:13:05.990 05:02:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:05.990 05:02:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:06.577 05:02:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 
00:13:06.577 05:02:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:06.577 05:02:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:06.577 05:02:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:06.577 05:02:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.577 05:02:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:06.577 05:02:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.577 05:02:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:06.577 05:02:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:06.577 05:02:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:13:06.577 05:02:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.577 05:02:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:06.577 [2024-12-14 05:02:17.315233] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:06.577 [2024-12-14 05:02:17.315430] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:06.577 [2024-12-14 05:02:17.326751] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:06.577 05:02:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.577 05:02:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:06.577 05:02:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:06.577 05:02:17 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:06.577 05:02:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:06.577 05:02:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.577 05:02:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:06.577 05:02:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.577 05:02:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:06.577 05:02:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:06.577 05:02:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:13:06.577 05:02:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.577 05:02:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:06.577 [2024-12-14 05:02:17.374695] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:06.577 [2024-12-14 05:02:17.374800] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:13:06.577 05:02:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.577 05:02:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:06.577 05:02:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:06.577 05:02:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:13:06.577 05:02:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:06.577 
05:02:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.577 05:02:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:06.577 05:02:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.577 05:02:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:13:06.577 05:02:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:13:06.577 05:02:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:13:06.577 05:02:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:13:06.577 05:02:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:06.577 05:02:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:06.577 05:02:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.577 05:02:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:06.577 BaseBdev2 00:13:06.577 05:02:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.577 05:02:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:13:06.577 05:02:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:13:06.577 05:02:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:06.577 05:02:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:13:06.577 05:02:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:06.577 05:02:17 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:06.577 05:02:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:06.577 05:02:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.577 05:02:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:06.577 05:02:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.577 05:02:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:06.577 05:02:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.577 05:02:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:06.837 [ 00:13:06.837 { 00:13:06.837 "name": "BaseBdev2", 00:13:06.837 "aliases": [ 00:13:06.837 "1e36e0f4-6633-4598-bfbf-1b0ed4c21667" 00:13:06.837 ], 00:13:06.837 "product_name": "Malloc disk", 00:13:06.837 "block_size": 512, 00:13:06.837 "num_blocks": 65536, 00:13:06.837 "uuid": "1e36e0f4-6633-4598-bfbf-1b0ed4c21667", 00:13:06.837 "assigned_rate_limits": { 00:13:06.837 "rw_ios_per_sec": 0, 00:13:06.837 "rw_mbytes_per_sec": 0, 00:13:06.837 "r_mbytes_per_sec": 0, 00:13:06.837 "w_mbytes_per_sec": 0 00:13:06.837 }, 00:13:06.837 "claimed": false, 00:13:06.837 "zoned": false, 00:13:06.837 "supported_io_types": { 00:13:06.837 "read": true, 00:13:06.837 "write": true, 00:13:06.837 "unmap": true, 00:13:06.837 "flush": true, 00:13:06.837 "reset": true, 00:13:06.837 "nvme_admin": false, 00:13:06.837 "nvme_io": false, 00:13:06.837 "nvme_io_md": false, 00:13:06.837 "write_zeroes": true, 00:13:06.837 "zcopy": true, 00:13:06.837 "get_zone_info": false, 00:13:06.837 "zone_management": false, 00:13:06.837 "zone_append": false, 00:13:06.837 "compare": false, 00:13:06.837 "compare_and_write": false, 
00:13:06.837 "abort": true, 00:13:06.837 "seek_hole": false, 00:13:06.837 "seek_data": false, 00:13:06.837 "copy": true, 00:13:06.837 "nvme_iov_md": false 00:13:06.837 }, 00:13:06.837 "memory_domains": [ 00:13:06.837 { 00:13:06.837 "dma_device_id": "system", 00:13:06.837 "dma_device_type": 1 00:13:06.837 }, 00:13:06.837 { 00:13:06.837 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:06.837 "dma_device_type": 2 00:13:06.837 } 00:13:06.837 ], 00:13:06.837 "driver_specific": {} 00:13:06.837 } 00:13:06.837 ] 00:13:06.837 05:02:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.837 05:02:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:13:06.837 05:02:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:06.837 05:02:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:06.837 05:02:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:06.837 05:02:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.837 05:02:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:06.837 BaseBdev3 00:13:06.837 05:02:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.837 05:02:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:13:06.837 05:02:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:13:06.837 05:02:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:06.837 05:02:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:13:06.837 05:02:17 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:06.837 05:02:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:06.837 05:02:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:06.837 05:02:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.837 05:02:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:06.837 05:02:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.837 05:02:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:06.837 05:02:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.838 05:02:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:06.838 [ 00:13:06.838 { 00:13:06.838 "name": "BaseBdev3", 00:13:06.838 "aliases": [ 00:13:06.838 "b3ed6b87-92d8-43d1-8248-e967bb5ddb61" 00:13:06.838 ], 00:13:06.838 "product_name": "Malloc disk", 00:13:06.838 "block_size": 512, 00:13:06.838 "num_blocks": 65536, 00:13:06.838 "uuid": "b3ed6b87-92d8-43d1-8248-e967bb5ddb61", 00:13:06.838 "assigned_rate_limits": { 00:13:06.838 "rw_ios_per_sec": 0, 00:13:06.838 "rw_mbytes_per_sec": 0, 00:13:06.838 "r_mbytes_per_sec": 0, 00:13:06.838 "w_mbytes_per_sec": 0 00:13:06.838 }, 00:13:06.838 "claimed": false, 00:13:06.838 "zoned": false, 00:13:06.838 "supported_io_types": { 00:13:06.838 "read": true, 00:13:06.838 "write": true, 00:13:06.838 "unmap": true, 00:13:06.838 "flush": true, 00:13:06.838 "reset": true, 00:13:06.838 "nvme_admin": false, 00:13:06.838 "nvme_io": false, 00:13:06.838 "nvme_io_md": false, 00:13:06.838 "write_zeroes": true, 00:13:06.838 "zcopy": true, 00:13:06.838 "get_zone_info": false, 00:13:06.838 "zone_management": false, 
00:13:06.838 "zone_append": false, 00:13:06.838 "compare": false, 00:13:06.838 "compare_and_write": false, 00:13:06.838 "abort": true, 00:13:06.838 "seek_hole": false, 00:13:06.838 "seek_data": false, 00:13:06.838 "copy": true, 00:13:06.838 "nvme_iov_md": false 00:13:06.838 }, 00:13:06.838 "memory_domains": [ 00:13:06.838 { 00:13:06.838 "dma_device_id": "system", 00:13:06.838 "dma_device_type": 1 00:13:06.838 }, 00:13:06.838 { 00:13:06.838 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:06.838 "dma_device_type": 2 00:13:06.838 } 00:13:06.838 ], 00:13:06.838 "driver_specific": {} 00:13:06.838 } 00:13:06.838 ] 00:13:06.838 05:02:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.838 05:02:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:13:06.838 05:02:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:06.838 05:02:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:06.838 05:02:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:06.838 05:02:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.838 05:02:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:06.838 [2024-12-14 05:02:17.529774] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:06.838 [2024-12-14 05:02:17.529873] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:06.838 [2024-12-14 05:02:17.529914] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:06.838 [2024-12-14 05:02:17.531703] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:06.838 
05:02:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.838 05:02:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:06.838 05:02:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:06.838 05:02:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:06.838 05:02:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:06.838 05:02:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:06.838 05:02:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:06.838 05:02:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:06.838 05:02:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:06.838 05:02:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:06.838 05:02:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:06.838 05:02:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:06.838 05:02:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:06.838 05:02:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.838 05:02:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:06.838 05:02:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.838 05:02:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:13:06.838 "name": "Existed_Raid", 00:13:06.838 "uuid": "1f1e7f12-5ea9-49b9-a44e-8b6d198c5d19", 00:13:06.838 "strip_size_kb": 64, 00:13:06.838 "state": "configuring", 00:13:06.838 "raid_level": "raid5f", 00:13:06.838 "superblock": true, 00:13:06.838 "num_base_bdevs": 3, 00:13:06.838 "num_base_bdevs_discovered": 2, 00:13:06.838 "num_base_bdevs_operational": 3, 00:13:06.838 "base_bdevs_list": [ 00:13:06.838 { 00:13:06.838 "name": "BaseBdev1", 00:13:06.838 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:06.838 "is_configured": false, 00:13:06.838 "data_offset": 0, 00:13:06.838 "data_size": 0 00:13:06.838 }, 00:13:06.838 { 00:13:06.838 "name": "BaseBdev2", 00:13:06.838 "uuid": "1e36e0f4-6633-4598-bfbf-1b0ed4c21667", 00:13:06.838 "is_configured": true, 00:13:06.838 "data_offset": 2048, 00:13:06.838 "data_size": 63488 00:13:06.838 }, 00:13:06.838 { 00:13:06.838 "name": "BaseBdev3", 00:13:06.838 "uuid": "b3ed6b87-92d8-43d1-8248-e967bb5ddb61", 00:13:06.838 "is_configured": true, 00:13:06.838 "data_offset": 2048, 00:13:06.838 "data_size": 63488 00:13:06.838 } 00:13:06.838 ] 00:13:06.838 }' 00:13:06.838 05:02:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:06.838 05:02:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:07.098 05:02:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:07.098 05:02:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.098 05:02:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:07.098 [2024-12-14 05:02:17.976967] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:07.357 05:02:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.357 05:02:17 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:07.357 05:02:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:07.357 05:02:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:07.357 05:02:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:07.357 05:02:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:07.357 05:02:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:07.357 05:02:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:07.357 05:02:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:07.357 05:02:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:07.357 05:02:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:07.357 05:02:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:07.357 05:02:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:07.357 05:02:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.357 05:02:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:07.357 05:02:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.357 05:02:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:07.357 "name": "Existed_Raid", 00:13:07.357 "uuid": "1f1e7f12-5ea9-49b9-a44e-8b6d198c5d19", 00:13:07.357 "strip_size_kb": 64, 00:13:07.357 
"state": "configuring", 00:13:07.357 "raid_level": "raid5f", 00:13:07.357 "superblock": true, 00:13:07.357 "num_base_bdevs": 3, 00:13:07.357 "num_base_bdevs_discovered": 1, 00:13:07.357 "num_base_bdevs_operational": 3, 00:13:07.357 "base_bdevs_list": [ 00:13:07.357 { 00:13:07.357 "name": "BaseBdev1", 00:13:07.357 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:07.357 "is_configured": false, 00:13:07.357 "data_offset": 0, 00:13:07.357 "data_size": 0 00:13:07.357 }, 00:13:07.357 { 00:13:07.357 "name": null, 00:13:07.357 "uuid": "1e36e0f4-6633-4598-bfbf-1b0ed4c21667", 00:13:07.357 "is_configured": false, 00:13:07.357 "data_offset": 0, 00:13:07.357 "data_size": 63488 00:13:07.357 }, 00:13:07.357 { 00:13:07.357 "name": "BaseBdev3", 00:13:07.357 "uuid": "b3ed6b87-92d8-43d1-8248-e967bb5ddb61", 00:13:07.357 "is_configured": true, 00:13:07.357 "data_offset": 2048, 00:13:07.357 "data_size": 63488 00:13:07.357 } 00:13:07.357 ] 00:13:07.357 }' 00:13:07.357 05:02:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:07.357 05:02:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:07.616 05:02:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:07.616 05:02:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.616 05:02:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:07.616 05:02:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:07.616 05:02:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.616 05:02:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:13:07.616 05:02:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 
512 -b BaseBdev1 00:13:07.616 05:02:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.616 05:02:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:07.616 [2024-12-14 05:02:18.483044] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:07.616 BaseBdev1 00:13:07.616 05:02:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.616 05:02:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:13:07.616 05:02:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:13:07.616 05:02:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:07.616 05:02:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:13:07.616 05:02:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:07.616 05:02:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:07.616 05:02:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:07.616 05:02:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.616 05:02:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:07.616 05:02:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.617 05:02:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:07.617 05:02:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.617 05:02:18 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:13:07.876 [ 00:13:07.876 { 00:13:07.876 "name": "BaseBdev1", 00:13:07.876 "aliases": [ 00:13:07.876 "30392084-ea01-4cdb-9d6c-d4d7368ebf7e" 00:13:07.876 ], 00:13:07.876 "product_name": "Malloc disk", 00:13:07.876 "block_size": 512, 00:13:07.876 "num_blocks": 65536, 00:13:07.876 "uuid": "30392084-ea01-4cdb-9d6c-d4d7368ebf7e", 00:13:07.876 "assigned_rate_limits": { 00:13:07.876 "rw_ios_per_sec": 0, 00:13:07.876 "rw_mbytes_per_sec": 0, 00:13:07.876 "r_mbytes_per_sec": 0, 00:13:07.876 "w_mbytes_per_sec": 0 00:13:07.876 }, 00:13:07.876 "claimed": true, 00:13:07.876 "claim_type": "exclusive_write", 00:13:07.876 "zoned": false, 00:13:07.876 "supported_io_types": { 00:13:07.876 "read": true, 00:13:07.876 "write": true, 00:13:07.876 "unmap": true, 00:13:07.876 "flush": true, 00:13:07.876 "reset": true, 00:13:07.876 "nvme_admin": false, 00:13:07.876 "nvme_io": false, 00:13:07.876 "nvme_io_md": false, 00:13:07.876 "write_zeroes": true, 00:13:07.876 "zcopy": true, 00:13:07.876 "get_zone_info": false, 00:13:07.876 "zone_management": false, 00:13:07.876 "zone_append": false, 00:13:07.876 "compare": false, 00:13:07.876 "compare_and_write": false, 00:13:07.876 "abort": true, 00:13:07.876 "seek_hole": false, 00:13:07.876 "seek_data": false, 00:13:07.876 "copy": true, 00:13:07.876 "nvme_iov_md": false 00:13:07.876 }, 00:13:07.876 "memory_domains": [ 00:13:07.876 { 00:13:07.876 "dma_device_id": "system", 00:13:07.876 "dma_device_type": 1 00:13:07.876 }, 00:13:07.876 { 00:13:07.876 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:07.876 "dma_device_type": 2 00:13:07.876 } 00:13:07.876 ], 00:13:07.876 "driver_specific": {} 00:13:07.876 } 00:13:07.876 ] 00:13:07.876 05:02:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.876 05:02:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:13:07.876 05:02:18 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:07.876 05:02:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:07.876 05:02:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:07.876 05:02:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:07.876 05:02:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:07.876 05:02:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:07.876 05:02:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:07.876 05:02:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:07.876 05:02:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:07.876 05:02:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:07.876 05:02:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:07.876 05:02:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:07.876 05:02:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.876 05:02:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:07.876 05:02:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.876 05:02:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:07.876 "name": "Existed_Raid", 00:13:07.876 "uuid": "1f1e7f12-5ea9-49b9-a44e-8b6d198c5d19", 00:13:07.876 "strip_size_kb": 64, 00:13:07.876 
"state": "configuring", 00:13:07.877 "raid_level": "raid5f", 00:13:07.877 "superblock": true, 00:13:07.877 "num_base_bdevs": 3, 00:13:07.877 "num_base_bdevs_discovered": 2, 00:13:07.877 "num_base_bdevs_operational": 3, 00:13:07.877 "base_bdevs_list": [ 00:13:07.877 { 00:13:07.877 "name": "BaseBdev1", 00:13:07.877 "uuid": "30392084-ea01-4cdb-9d6c-d4d7368ebf7e", 00:13:07.877 "is_configured": true, 00:13:07.877 "data_offset": 2048, 00:13:07.877 "data_size": 63488 00:13:07.877 }, 00:13:07.877 { 00:13:07.877 "name": null, 00:13:07.877 "uuid": "1e36e0f4-6633-4598-bfbf-1b0ed4c21667", 00:13:07.877 "is_configured": false, 00:13:07.877 "data_offset": 0, 00:13:07.877 "data_size": 63488 00:13:07.877 }, 00:13:07.877 { 00:13:07.877 "name": "BaseBdev3", 00:13:07.877 "uuid": "b3ed6b87-92d8-43d1-8248-e967bb5ddb61", 00:13:07.877 "is_configured": true, 00:13:07.877 "data_offset": 2048, 00:13:07.877 "data_size": 63488 00:13:07.877 } 00:13:07.877 ] 00:13:07.877 }' 00:13:07.877 05:02:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:07.877 05:02:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:08.136 05:02:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:08.136 05:02:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:08.136 05:02:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.136 05:02:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:08.136 05:02:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.136 05:02:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:13:08.136 05:02:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd 
bdev_raid_remove_base_bdev BaseBdev3 00:13:08.136 05:02:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.136 05:02:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:08.136 [2024-12-14 05:02:18.998205] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:08.136 05:02:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.136 05:02:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:08.136 05:02:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:08.136 05:02:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:08.136 05:02:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:08.136 05:02:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:08.136 05:02:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:08.136 05:02:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:08.136 05:02:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:08.136 05:02:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:08.136 05:02:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:08.136 05:02:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:08.136 05:02:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.136 05:02:19 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:13:08.136 05:02:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:08.395 05:02:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.395 05:02:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:08.395 "name": "Existed_Raid", 00:13:08.395 "uuid": "1f1e7f12-5ea9-49b9-a44e-8b6d198c5d19", 00:13:08.395 "strip_size_kb": 64, 00:13:08.395 "state": "configuring", 00:13:08.395 "raid_level": "raid5f", 00:13:08.395 "superblock": true, 00:13:08.395 "num_base_bdevs": 3, 00:13:08.395 "num_base_bdevs_discovered": 1, 00:13:08.395 "num_base_bdevs_operational": 3, 00:13:08.395 "base_bdevs_list": [ 00:13:08.395 { 00:13:08.395 "name": "BaseBdev1", 00:13:08.395 "uuid": "30392084-ea01-4cdb-9d6c-d4d7368ebf7e", 00:13:08.395 "is_configured": true, 00:13:08.395 "data_offset": 2048, 00:13:08.395 "data_size": 63488 00:13:08.395 }, 00:13:08.395 { 00:13:08.395 "name": null, 00:13:08.395 "uuid": "1e36e0f4-6633-4598-bfbf-1b0ed4c21667", 00:13:08.395 "is_configured": false, 00:13:08.395 "data_offset": 0, 00:13:08.395 "data_size": 63488 00:13:08.395 }, 00:13:08.395 { 00:13:08.395 "name": null, 00:13:08.395 "uuid": "b3ed6b87-92d8-43d1-8248-e967bb5ddb61", 00:13:08.395 "is_configured": false, 00:13:08.395 "data_offset": 0, 00:13:08.395 "data_size": 63488 00:13:08.395 } 00:13:08.395 ] 00:13:08.395 }' 00:13:08.395 05:02:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:08.395 05:02:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:08.655 05:02:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:08.655 05:02:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:08.655 05:02:19 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.655 05:02:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:08.655 05:02:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.655 05:02:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:13:08.655 05:02:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:13:08.655 05:02:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.655 05:02:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:08.655 [2024-12-14 05:02:19.501375] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:08.655 05:02:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.655 05:02:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:08.655 05:02:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:08.655 05:02:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:08.655 05:02:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:08.655 05:02:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:08.655 05:02:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:08.655 05:02:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:08.655 05:02:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:13:08.655 05:02:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:08.655 05:02:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:08.655 05:02:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:08.655 05:02:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.655 05:02:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:08.655 05:02:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:08.655 05:02:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.914 05:02:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:08.914 "name": "Existed_Raid", 00:13:08.914 "uuid": "1f1e7f12-5ea9-49b9-a44e-8b6d198c5d19", 00:13:08.914 "strip_size_kb": 64, 00:13:08.914 "state": "configuring", 00:13:08.914 "raid_level": "raid5f", 00:13:08.914 "superblock": true, 00:13:08.914 "num_base_bdevs": 3, 00:13:08.914 "num_base_bdevs_discovered": 2, 00:13:08.914 "num_base_bdevs_operational": 3, 00:13:08.914 "base_bdevs_list": [ 00:13:08.914 { 00:13:08.914 "name": "BaseBdev1", 00:13:08.914 "uuid": "30392084-ea01-4cdb-9d6c-d4d7368ebf7e", 00:13:08.914 "is_configured": true, 00:13:08.914 "data_offset": 2048, 00:13:08.914 "data_size": 63488 00:13:08.914 }, 00:13:08.914 { 00:13:08.914 "name": null, 00:13:08.914 "uuid": "1e36e0f4-6633-4598-bfbf-1b0ed4c21667", 00:13:08.914 "is_configured": false, 00:13:08.914 "data_offset": 0, 00:13:08.914 "data_size": 63488 00:13:08.914 }, 00:13:08.914 { 00:13:08.914 "name": "BaseBdev3", 00:13:08.914 "uuid": "b3ed6b87-92d8-43d1-8248-e967bb5ddb61", 00:13:08.914 "is_configured": true, 00:13:08.914 "data_offset": 2048, 00:13:08.914 
"data_size": 63488 00:13:08.914 } 00:13:08.914 ] 00:13:08.914 }' 00:13:08.914 05:02:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:08.914 05:02:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:09.174 05:02:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:09.175 05:02:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.175 05:02:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:09.175 05:02:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:09.175 05:02:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.175 05:02:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:13:09.175 05:02:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:09.175 05:02:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.175 05:02:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:09.175 [2024-12-14 05:02:20.016514] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:09.175 05:02:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.175 05:02:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:09.175 05:02:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:09.175 05:02:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:09.175 05:02:20 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:09.175 05:02:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:09.175 05:02:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:09.175 05:02:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:09.175 05:02:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:09.175 05:02:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:09.175 05:02:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:09.175 05:02:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:09.175 05:02:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:09.175 05:02:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.175 05:02:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:09.175 05:02:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.433 05:02:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:09.433 "name": "Existed_Raid", 00:13:09.433 "uuid": "1f1e7f12-5ea9-49b9-a44e-8b6d198c5d19", 00:13:09.433 "strip_size_kb": 64, 00:13:09.433 "state": "configuring", 00:13:09.433 "raid_level": "raid5f", 00:13:09.433 "superblock": true, 00:13:09.433 "num_base_bdevs": 3, 00:13:09.434 "num_base_bdevs_discovered": 1, 00:13:09.434 "num_base_bdevs_operational": 3, 00:13:09.434 "base_bdevs_list": [ 00:13:09.434 { 00:13:09.434 "name": null, 00:13:09.434 "uuid": "30392084-ea01-4cdb-9d6c-d4d7368ebf7e", 
00:13:09.434 "is_configured": false, 00:13:09.434 "data_offset": 0, 00:13:09.434 "data_size": 63488 00:13:09.434 }, 00:13:09.434 { 00:13:09.434 "name": null, 00:13:09.434 "uuid": "1e36e0f4-6633-4598-bfbf-1b0ed4c21667", 00:13:09.434 "is_configured": false, 00:13:09.434 "data_offset": 0, 00:13:09.434 "data_size": 63488 00:13:09.434 }, 00:13:09.434 { 00:13:09.434 "name": "BaseBdev3", 00:13:09.434 "uuid": "b3ed6b87-92d8-43d1-8248-e967bb5ddb61", 00:13:09.434 "is_configured": true, 00:13:09.434 "data_offset": 2048, 00:13:09.434 "data_size": 63488 00:13:09.434 } 00:13:09.434 ] 00:13:09.434 }' 00:13:09.434 05:02:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:09.434 05:02:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:09.693 05:02:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:09.693 05:02:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.693 05:02:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:09.693 05:02:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:09.693 05:02:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.693 05:02:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:13:09.693 05:02:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:13:09.693 05:02:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.693 05:02:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:09.693 [2024-12-14 05:02:20.530095] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 
is claimed 00:13:09.693 05:02:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.693 05:02:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:09.693 05:02:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:09.693 05:02:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:09.693 05:02:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:09.693 05:02:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:09.693 05:02:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:09.693 05:02:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:09.693 05:02:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:09.693 05:02:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:09.693 05:02:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:09.693 05:02:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:09.693 05:02:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:09.693 05:02:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.693 05:02:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:09.693 05:02:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.953 05:02:20 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:09.953 "name": "Existed_Raid", 00:13:09.953 "uuid": "1f1e7f12-5ea9-49b9-a44e-8b6d198c5d19", 00:13:09.953 "strip_size_kb": 64, 00:13:09.953 "state": "configuring", 00:13:09.953 "raid_level": "raid5f", 00:13:09.953 "superblock": true, 00:13:09.953 "num_base_bdevs": 3, 00:13:09.953 "num_base_bdevs_discovered": 2, 00:13:09.953 "num_base_bdevs_operational": 3, 00:13:09.953 "base_bdevs_list": [ 00:13:09.953 { 00:13:09.953 "name": null, 00:13:09.953 "uuid": "30392084-ea01-4cdb-9d6c-d4d7368ebf7e", 00:13:09.953 "is_configured": false, 00:13:09.953 "data_offset": 0, 00:13:09.953 "data_size": 63488 00:13:09.953 }, 00:13:09.953 { 00:13:09.953 "name": "BaseBdev2", 00:13:09.953 "uuid": "1e36e0f4-6633-4598-bfbf-1b0ed4c21667", 00:13:09.953 "is_configured": true, 00:13:09.953 "data_offset": 2048, 00:13:09.953 "data_size": 63488 00:13:09.953 }, 00:13:09.953 { 00:13:09.953 "name": "BaseBdev3", 00:13:09.953 "uuid": "b3ed6b87-92d8-43d1-8248-e967bb5ddb61", 00:13:09.953 "is_configured": true, 00:13:09.953 "data_offset": 2048, 00:13:09.953 "data_size": 63488 00:13:09.953 } 00:13:09.953 ] 00:13:09.953 }' 00:13:09.953 05:02:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:09.953 05:02:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:10.213 05:02:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:10.213 05:02:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:10.213 05:02:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.213 05:02:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:10.213 05:02:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.213 05:02:21 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:13:10.213 05:02:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:10.213 05:02:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:13:10.213 05:02:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.213 05:02:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:10.213 05:02:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.213 05:02:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 30392084-ea01-4cdb-9d6c-d4d7368ebf7e 00:13:10.213 05:02:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.213 05:02:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:10.213 [2024-12-14 05:02:21.084020] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:13:10.213 [2024-12-14 05:02:21.084261] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:13:10.213 [2024-12-14 05:02:21.084301] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:13:10.213 [2024-12-14 05:02:21.084572] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:13:10.213 NewBaseBdev 00:13:10.213 [2024-12-14 05:02:21.085020] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:13:10.213 [2024-12-14 05:02:21.085037] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:13:10.213 [2024-12-14 05:02:21.085138] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:13:10.213 05:02:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.213 05:02:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:13:10.213 05:02:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:13:10.213 05:02:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:10.213 05:02:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:13:10.213 05:02:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:10.213 05:02:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:10.213 05:02:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:10.213 05:02:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.213 05:02:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:10.472 05:02:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.472 05:02:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:13:10.472 05:02:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.472 05:02:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:10.472 [ 00:13:10.472 { 00:13:10.472 "name": "NewBaseBdev", 00:13:10.472 "aliases": [ 00:13:10.472 "30392084-ea01-4cdb-9d6c-d4d7368ebf7e" 00:13:10.472 ], 00:13:10.472 "product_name": "Malloc disk", 00:13:10.472 "block_size": 512, 00:13:10.472 "num_blocks": 65536, 00:13:10.472 "uuid": "30392084-ea01-4cdb-9d6c-d4d7368ebf7e", 
00:13:10.472 "assigned_rate_limits": { 00:13:10.472 "rw_ios_per_sec": 0, 00:13:10.472 "rw_mbytes_per_sec": 0, 00:13:10.472 "r_mbytes_per_sec": 0, 00:13:10.472 "w_mbytes_per_sec": 0 00:13:10.472 }, 00:13:10.472 "claimed": true, 00:13:10.472 "claim_type": "exclusive_write", 00:13:10.472 "zoned": false, 00:13:10.472 "supported_io_types": { 00:13:10.472 "read": true, 00:13:10.473 "write": true, 00:13:10.473 "unmap": true, 00:13:10.473 "flush": true, 00:13:10.473 "reset": true, 00:13:10.473 "nvme_admin": false, 00:13:10.473 "nvme_io": false, 00:13:10.473 "nvme_io_md": false, 00:13:10.473 "write_zeroes": true, 00:13:10.473 "zcopy": true, 00:13:10.473 "get_zone_info": false, 00:13:10.473 "zone_management": false, 00:13:10.473 "zone_append": false, 00:13:10.473 "compare": false, 00:13:10.473 "compare_and_write": false, 00:13:10.473 "abort": true, 00:13:10.473 "seek_hole": false, 00:13:10.473 "seek_data": false, 00:13:10.473 "copy": true, 00:13:10.473 "nvme_iov_md": false 00:13:10.473 }, 00:13:10.473 "memory_domains": [ 00:13:10.473 { 00:13:10.473 "dma_device_id": "system", 00:13:10.473 "dma_device_type": 1 00:13:10.473 }, 00:13:10.473 { 00:13:10.473 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:10.473 "dma_device_type": 2 00:13:10.473 } 00:13:10.473 ], 00:13:10.473 "driver_specific": {} 00:13:10.473 } 00:13:10.473 ] 00:13:10.473 05:02:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.473 05:02:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:13:10.473 05:02:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:13:10.473 05:02:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:10.473 05:02:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:10.473 05:02:21 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:10.473 05:02:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:10.473 05:02:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:10.473 05:02:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:10.473 05:02:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:10.473 05:02:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:10.473 05:02:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:10.473 05:02:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:10.473 05:02:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.473 05:02:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:10.473 05:02:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:10.473 05:02:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.473 05:02:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:10.473 "name": "Existed_Raid", 00:13:10.473 "uuid": "1f1e7f12-5ea9-49b9-a44e-8b6d198c5d19", 00:13:10.473 "strip_size_kb": 64, 00:13:10.473 "state": "online", 00:13:10.473 "raid_level": "raid5f", 00:13:10.473 "superblock": true, 00:13:10.473 "num_base_bdevs": 3, 00:13:10.473 "num_base_bdevs_discovered": 3, 00:13:10.473 "num_base_bdevs_operational": 3, 00:13:10.473 "base_bdevs_list": [ 00:13:10.473 { 00:13:10.473 "name": "NewBaseBdev", 00:13:10.473 "uuid": "30392084-ea01-4cdb-9d6c-d4d7368ebf7e", 
00:13:10.473 "is_configured": true, 00:13:10.473 "data_offset": 2048, 00:13:10.473 "data_size": 63488 00:13:10.473 }, 00:13:10.473 { 00:13:10.473 "name": "BaseBdev2", 00:13:10.473 "uuid": "1e36e0f4-6633-4598-bfbf-1b0ed4c21667", 00:13:10.473 "is_configured": true, 00:13:10.473 "data_offset": 2048, 00:13:10.473 "data_size": 63488 00:13:10.473 }, 00:13:10.473 { 00:13:10.473 "name": "BaseBdev3", 00:13:10.473 "uuid": "b3ed6b87-92d8-43d1-8248-e967bb5ddb61", 00:13:10.473 "is_configured": true, 00:13:10.473 "data_offset": 2048, 00:13:10.473 "data_size": 63488 00:13:10.473 } 00:13:10.473 ] 00:13:10.473 }' 00:13:10.473 05:02:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:10.473 05:02:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:10.732 05:02:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:13:10.732 05:02:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:10.732 05:02:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:10.732 05:02:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:10.732 05:02:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:13:10.732 05:02:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:10.732 05:02:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:10.732 05:02:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:10.732 05:02:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.732 05:02:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:10.732 
[2024-12-14 05:02:21.583519] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:10.732 05:02:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.993 05:02:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:10.993 "name": "Existed_Raid", 00:13:10.993 "aliases": [ 00:13:10.993 "1f1e7f12-5ea9-49b9-a44e-8b6d198c5d19" 00:13:10.993 ], 00:13:10.993 "product_name": "Raid Volume", 00:13:10.993 "block_size": 512, 00:13:10.993 "num_blocks": 126976, 00:13:10.993 "uuid": "1f1e7f12-5ea9-49b9-a44e-8b6d198c5d19", 00:13:10.994 "assigned_rate_limits": { 00:13:10.994 "rw_ios_per_sec": 0, 00:13:10.994 "rw_mbytes_per_sec": 0, 00:13:10.994 "r_mbytes_per_sec": 0, 00:13:10.994 "w_mbytes_per_sec": 0 00:13:10.994 }, 00:13:10.994 "claimed": false, 00:13:10.994 "zoned": false, 00:13:10.994 "supported_io_types": { 00:13:10.994 "read": true, 00:13:10.994 "write": true, 00:13:10.994 "unmap": false, 00:13:10.994 "flush": false, 00:13:10.994 "reset": true, 00:13:10.994 "nvme_admin": false, 00:13:10.994 "nvme_io": false, 00:13:10.994 "nvme_io_md": false, 00:13:10.994 "write_zeroes": true, 00:13:10.994 "zcopy": false, 00:13:10.994 "get_zone_info": false, 00:13:10.994 "zone_management": false, 00:13:10.994 "zone_append": false, 00:13:10.994 "compare": false, 00:13:10.994 "compare_and_write": false, 00:13:10.994 "abort": false, 00:13:10.994 "seek_hole": false, 00:13:10.994 "seek_data": false, 00:13:10.994 "copy": false, 00:13:10.994 "nvme_iov_md": false 00:13:10.994 }, 00:13:10.994 "driver_specific": { 00:13:10.994 "raid": { 00:13:10.994 "uuid": "1f1e7f12-5ea9-49b9-a44e-8b6d198c5d19", 00:13:10.994 "strip_size_kb": 64, 00:13:10.994 "state": "online", 00:13:10.994 "raid_level": "raid5f", 00:13:10.994 "superblock": true, 00:13:10.994 "num_base_bdevs": 3, 00:13:10.994 "num_base_bdevs_discovered": 3, 00:13:10.994 "num_base_bdevs_operational": 3, 00:13:10.994 "base_bdevs_list": 
[ 00:13:10.994 { 00:13:10.994 "name": "NewBaseBdev", 00:13:10.994 "uuid": "30392084-ea01-4cdb-9d6c-d4d7368ebf7e", 00:13:10.994 "is_configured": true, 00:13:10.994 "data_offset": 2048, 00:13:10.994 "data_size": 63488 00:13:10.994 }, 00:13:10.994 { 00:13:10.994 "name": "BaseBdev2", 00:13:10.994 "uuid": "1e36e0f4-6633-4598-bfbf-1b0ed4c21667", 00:13:10.994 "is_configured": true, 00:13:10.994 "data_offset": 2048, 00:13:10.994 "data_size": 63488 00:13:10.994 }, 00:13:10.994 { 00:13:10.994 "name": "BaseBdev3", 00:13:10.994 "uuid": "b3ed6b87-92d8-43d1-8248-e967bb5ddb61", 00:13:10.994 "is_configured": true, 00:13:10.994 "data_offset": 2048, 00:13:10.994 "data_size": 63488 00:13:10.994 } 00:13:10.994 ] 00:13:10.994 } 00:13:10.994 } 00:13:10.994 }' 00:13:10.994 05:02:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:10.994 05:02:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:13:10.994 BaseBdev2 00:13:10.994 BaseBdev3' 00:13:10.994 05:02:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:10.994 05:02:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:10.994 05:02:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:10.994 05:02:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:10.994 05:02:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:13:10.994 05:02:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.994 05:02:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:13:10.994 05:02:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.994 05:02:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:10.994 05:02:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:10.994 05:02:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:10.994 05:02:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:10.994 05:02:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:10.994 05:02:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.994 05:02:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:10.994 05:02:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.994 05:02:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:10.994 05:02:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:10.994 05:02:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:10.994 05:02:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:10.994 05:02:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.994 05:02:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:10.994 05:02:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:10.994 05:02:21 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.994 05:02:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:10.994 05:02:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:10.994 05:02:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:10.994 05:02:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.994 05:02:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:11.254 [2024-12-14 05:02:21.874846] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:11.254 [2024-12-14 05:02:21.874908] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:11.254 [2024-12-14 05:02:21.874995] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:11.254 [2024-12-14 05:02:21.875267] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:11.254 [2024-12-14 05:02:21.875347] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline 00:13:11.254 05:02:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.254 05:02:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 91065 00:13:11.254 05:02:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 91065 ']' 00:13:11.254 05:02:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 91065 00:13:11.254 05:02:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:13:11.254 05:02:21 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:11.254 05:02:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 91065 00:13:11.254 05:02:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:11.254 05:02:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:11.254 05:02:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 91065' 00:13:11.254 killing process with pid 91065 00:13:11.254 05:02:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 91065 00:13:11.254 [2024-12-14 05:02:21.923484] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:11.254 05:02:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 91065 00:13:11.254 [2024-12-14 05:02:21.954269] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:11.514 05:02:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:13:11.514 00:13:11.514 real 0m9.059s 00:13:11.514 user 0m15.429s 00:13:11.514 sys 0m1.934s 00:13:11.514 ************************************ 00:13:11.514 END TEST raid5f_state_function_test_sb 00:13:11.514 ************************************ 00:13:11.514 05:02:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:11.514 05:02:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:11.514 05:02:22 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 3 00:13:11.514 05:02:22 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:13:11.514 05:02:22 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:11.514 05:02:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 
00:13:11.514 ************************************ 00:13:11.514 START TEST raid5f_superblock_test 00:13:11.514 ************************************ 00:13:11.514 05:02:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid5f 3 00:13:11.514 05:02:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:13:11.514 05:02:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:13:11.514 05:02:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:13:11.514 05:02:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:13:11.514 05:02:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:13:11.514 05:02:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:13:11.514 05:02:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:13:11.514 05:02:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:13:11.514 05:02:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:13:11.514 05:02:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:13:11.514 05:02:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:13:11.514 05:02:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:13:11.514 05:02:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:13:11.514 05:02:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:13:11.514 05:02:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:13:11.514 05:02:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 
00:13:11.514 05:02:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=91669 00:13:11.514 05:02:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:13:11.514 05:02:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 91669 00:13:11.514 05:02:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 91669 ']' 00:13:11.514 05:02:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:11.514 05:02:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:11.514 05:02:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:11.514 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:11.514 05:02:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:11.514 05:02:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.514 [2024-12-14 05:02:22.372086] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:13:11.514 [2024-12-14 05:02:22.372219] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91669 ] 00:13:11.774 [2024-12-14 05:02:22.533560] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:11.774 [2024-12-14 05:02:22.580505] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:11.774 [2024-12-14 05:02:22.622989] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:11.774 [2024-12-14 05:02:22.623028] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:12.344 05:02:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:12.344 05:02:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:13:12.344 05:02:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:13:12.344 05:02:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:12.344 05:02:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:13:12.344 05:02:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:13:12.344 05:02:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:13:12.344 05:02:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:12.344 05:02:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:12.344 05:02:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:12.344 05:02:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:13:12.344 05:02:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.344 05:02:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.344 malloc1 00:13:12.344 05:02:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.344 05:02:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:12.344 05:02:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.344 05:02:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.604 [2024-12-14 05:02:23.225547] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:12.604 [2024-12-14 05:02:23.225659] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:12.604 [2024-12-14 05:02:23.225700] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:12.604 [2024-12-14 05:02:23.225734] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:12.604 [2024-12-14 05:02:23.227806] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:12.604 [2024-12-14 05:02:23.227898] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:12.604 pt1 00:13:12.604 05:02:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.604 05:02:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:12.604 05:02:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:12.604 05:02:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:13:12.604 05:02:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:13:12.604 05:02:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:13:12.604 05:02:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:12.604 05:02:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:12.604 05:02:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:12.604 05:02:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:13:12.604 05:02:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.604 05:02:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.604 malloc2 00:13:12.604 05:02:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.604 05:02:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:12.604 05:02:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.604 05:02:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.604 [2024-12-14 05:02:23.273512] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:12.604 [2024-12-14 05:02:23.273789] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:12.604 [2024-12-14 05:02:23.273853] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:12.604 [2024-12-14 05:02:23.273891] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:12.604 [2024-12-14 05:02:23.278645] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:12.604 [2024-12-14 05:02:23.278719] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:12.604 pt2 00:13:12.604 05:02:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.604 05:02:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:12.604 05:02:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:12.604 05:02:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:13:12.604 05:02:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:13:12.604 05:02:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:13:12.604 05:02:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:12.604 05:02:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:12.604 05:02:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:12.604 05:02:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:13:12.604 05:02:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.604 05:02:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.604 malloc3 00:13:12.604 05:02:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.604 05:02:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:12.604 05:02:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.604 05:02:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.604 [2024-12-14 05:02:23.304863] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:12.604 [2024-12-14 05:02:23.304971] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:12.604 [2024-12-14 05:02:23.305007] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:12.604 [2024-12-14 05:02:23.305035] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:12.604 [2024-12-14 05:02:23.307035] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:12.604 [2024-12-14 05:02:23.307126] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:12.604 pt3 00:13:12.605 05:02:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.605 05:02:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:12.605 05:02:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:12.605 05:02:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:13:12.605 05:02:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.605 05:02:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.605 [2024-12-14 05:02:23.316895] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:12.605 [2024-12-14 05:02:23.318702] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:12.605 [2024-12-14 05:02:23.318818] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:12.605 [2024-12-14 05:02:23.318992] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:13:12.605 [2024-12-14 05:02:23.319041] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 
00:13:12.605 [2024-12-14 05:02:23.319333] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:13:12.605 [2024-12-14 05:02:23.319794] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:13:12.605 [2024-12-14 05:02:23.319847] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:13:12.605 [2024-12-14 05:02:23.320016] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:12.605 05:02:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.605 05:02:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:13:12.605 05:02:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:12.605 05:02:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:12.605 05:02:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:12.605 05:02:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:12.605 05:02:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:12.605 05:02:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:12.605 05:02:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:12.605 05:02:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:12.605 05:02:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:12.605 05:02:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:12.605 05:02:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:13:12.605 05:02:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.605 05:02:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.605 05:02:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.605 05:02:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:12.605 "name": "raid_bdev1", 00:13:12.605 "uuid": "0f696552-af95-43de-adf0-41445fd2fe90", 00:13:12.605 "strip_size_kb": 64, 00:13:12.605 "state": "online", 00:13:12.605 "raid_level": "raid5f", 00:13:12.605 "superblock": true, 00:13:12.605 "num_base_bdevs": 3, 00:13:12.605 "num_base_bdevs_discovered": 3, 00:13:12.605 "num_base_bdevs_operational": 3, 00:13:12.605 "base_bdevs_list": [ 00:13:12.605 { 00:13:12.605 "name": "pt1", 00:13:12.605 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:12.605 "is_configured": true, 00:13:12.605 "data_offset": 2048, 00:13:12.605 "data_size": 63488 00:13:12.605 }, 00:13:12.605 { 00:13:12.605 "name": "pt2", 00:13:12.605 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:12.605 "is_configured": true, 00:13:12.605 "data_offset": 2048, 00:13:12.605 "data_size": 63488 00:13:12.605 }, 00:13:12.605 { 00:13:12.605 "name": "pt3", 00:13:12.605 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:12.605 "is_configured": true, 00:13:12.605 "data_offset": 2048, 00:13:12.605 "data_size": 63488 00:13:12.605 } 00:13:12.605 ] 00:13:12.605 }' 00:13:12.605 05:02:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:12.605 05:02:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.174 05:02:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:13:13.174 05:02:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:13:13.174 05:02:23 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:13.174 05:02:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:13.174 05:02:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:13.174 05:02:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:13.174 05:02:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:13.174 05:02:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.174 05:02:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:13.174 05:02:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.174 [2024-12-14 05:02:23.764826] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:13.174 05:02:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.174 05:02:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:13.174 "name": "raid_bdev1", 00:13:13.174 "aliases": [ 00:13:13.174 "0f696552-af95-43de-adf0-41445fd2fe90" 00:13:13.174 ], 00:13:13.174 "product_name": "Raid Volume", 00:13:13.174 "block_size": 512, 00:13:13.174 "num_blocks": 126976, 00:13:13.174 "uuid": "0f696552-af95-43de-adf0-41445fd2fe90", 00:13:13.174 "assigned_rate_limits": { 00:13:13.174 "rw_ios_per_sec": 0, 00:13:13.174 "rw_mbytes_per_sec": 0, 00:13:13.174 "r_mbytes_per_sec": 0, 00:13:13.174 "w_mbytes_per_sec": 0 00:13:13.174 }, 00:13:13.174 "claimed": false, 00:13:13.174 "zoned": false, 00:13:13.174 "supported_io_types": { 00:13:13.174 "read": true, 00:13:13.174 "write": true, 00:13:13.174 "unmap": false, 00:13:13.174 "flush": false, 00:13:13.174 "reset": true, 00:13:13.174 "nvme_admin": false, 00:13:13.174 "nvme_io": false, 00:13:13.174 "nvme_io_md": false, 
00:13:13.174 "write_zeroes": true, 00:13:13.174 "zcopy": false, 00:13:13.174 "get_zone_info": false, 00:13:13.174 "zone_management": false, 00:13:13.174 "zone_append": false, 00:13:13.174 "compare": false, 00:13:13.174 "compare_and_write": false, 00:13:13.174 "abort": false, 00:13:13.174 "seek_hole": false, 00:13:13.174 "seek_data": false, 00:13:13.174 "copy": false, 00:13:13.174 "nvme_iov_md": false 00:13:13.174 }, 00:13:13.174 "driver_specific": { 00:13:13.174 "raid": { 00:13:13.174 "uuid": "0f696552-af95-43de-adf0-41445fd2fe90", 00:13:13.174 "strip_size_kb": 64, 00:13:13.174 "state": "online", 00:13:13.174 "raid_level": "raid5f", 00:13:13.174 "superblock": true, 00:13:13.174 "num_base_bdevs": 3, 00:13:13.174 "num_base_bdevs_discovered": 3, 00:13:13.174 "num_base_bdevs_operational": 3, 00:13:13.174 "base_bdevs_list": [ 00:13:13.174 { 00:13:13.174 "name": "pt1", 00:13:13.174 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:13.174 "is_configured": true, 00:13:13.174 "data_offset": 2048, 00:13:13.174 "data_size": 63488 00:13:13.174 }, 00:13:13.174 { 00:13:13.174 "name": "pt2", 00:13:13.174 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:13.174 "is_configured": true, 00:13:13.174 "data_offset": 2048, 00:13:13.174 "data_size": 63488 00:13:13.174 }, 00:13:13.174 { 00:13:13.174 "name": "pt3", 00:13:13.174 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:13.174 "is_configured": true, 00:13:13.174 "data_offset": 2048, 00:13:13.174 "data_size": 63488 00:13:13.174 } 00:13:13.174 ] 00:13:13.174 } 00:13:13.174 } 00:13:13.174 }' 00:13:13.174 05:02:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:13.174 05:02:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:13:13.174 pt2 00:13:13.174 pt3' 00:13:13.174 05:02:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:13:13.174 05:02:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:13.174 05:02:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:13.174 05:02:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:13.174 05:02:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:13:13.174 05:02:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.174 05:02:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.174 05:02:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.174 05:02:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:13.174 05:02:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:13.174 05:02:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:13.174 05:02:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:13:13.174 05:02:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.174 05:02:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:13.174 05:02:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.174 05:02:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.174 05:02:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:13.174 05:02:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:13.174 
05:02:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:13.174 05:02:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:13.174 05:02:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:13:13.174 05:02:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.174 05:02:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.174 05:02:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.174 05:02:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:13.174 05:02:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:13.174 05:02:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:13.174 05:02:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.174 05:02:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.174 05:02:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:13:13.174 [2024-12-14 05:02:24.048320] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:13.434 05:02:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.434 05:02:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=0f696552-af95-43de-adf0-41445fd2fe90 00:13:13.434 05:02:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 0f696552-af95-43de-adf0-41445fd2fe90 ']' 00:13:13.434 05:02:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:13.434 05:02:24 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.434 05:02:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.434 [2024-12-14 05:02:24.092080] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:13.434 [2024-12-14 05:02:24.092144] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:13.434 [2024-12-14 05:02:24.092255] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:13.434 [2024-12-14 05:02:24.092348] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:13.434 [2024-12-14 05:02:24.092408] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:13:13.434 05:02:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.434 05:02:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:13.434 05:02:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:13:13.434 05:02:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.434 05:02:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.434 05:02:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.434 05:02:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:13:13.434 05:02:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:13:13.434 05:02:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:13.434 05:02:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:13:13.434 05:02:24 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.434 05:02:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.434 05:02:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.434 05:02:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:13.435 05:02:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:13:13.435 05:02:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.435 05:02:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.435 05:02:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.435 05:02:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:13.435 05:02:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:13:13.435 05:02:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.435 05:02:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.435 05:02:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.435 05:02:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:13:13.435 05:02:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.435 05:02:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:13:13.435 05:02:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.435 05:02:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.435 05:02:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # 
'[' false == true ']' 00:13:13.435 05:02:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:13:13.435 05:02:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:13:13.435 05:02:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:13:13.435 05:02:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:13:13.435 05:02:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:13.435 05:02:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:13:13.435 05:02:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:13.435 05:02:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:13:13.435 05:02:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.435 05:02:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.435 [2024-12-14 05:02:24.251814] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:13:13.435 [2024-12-14 05:02:24.253642] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:13:13.435 [2024-12-14 05:02:24.253684] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:13:13.435 [2024-12-14 05:02:24.253730] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:13:13.435 [2024-12-14 05:02:24.253767] 
bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:13:13.435 [2024-12-14 05:02:24.253784] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:13:13.435 [2024-12-14 05:02:24.253796] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:13.435 [2024-12-14 05:02:24.253807] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state configuring 00:13:13.435 request: 00:13:13.435 { 00:13:13.435 "name": "raid_bdev1", 00:13:13.435 "raid_level": "raid5f", 00:13:13.435 "base_bdevs": [ 00:13:13.435 "malloc1", 00:13:13.435 "malloc2", 00:13:13.435 "malloc3" 00:13:13.435 ], 00:13:13.435 "strip_size_kb": 64, 00:13:13.435 "superblock": false, 00:13:13.435 "method": "bdev_raid_create", 00:13:13.435 "req_id": 1 00:13:13.435 } 00:13:13.435 Got JSON-RPC error response 00:13:13.435 response: 00:13:13.435 { 00:13:13.435 "code": -17, 00:13:13.435 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:13:13.435 } 00:13:13.435 05:02:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:13:13.435 05:02:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:13:13.435 05:02:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:13.435 05:02:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:13.435 05:02:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:13.435 05:02:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:13.435 05:02:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.435 05:02:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:13:13.435 
05:02:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.435 05:02:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.435 05:02:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:13:13.435 05:02:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:13:13.694 05:02:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:13.694 05:02:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.694 05:02:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.694 [2024-12-14 05:02:24.319669] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:13.694 [2024-12-14 05:02:24.319770] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:13.694 [2024-12-14 05:02:24.319802] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:13:13.695 [2024-12-14 05:02:24.319832] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:13.695 [2024-12-14 05:02:24.321856] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:13.695 [2024-12-14 05:02:24.321942] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:13.695 [2024-12-14 05:02:24.322019] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:13:13.695 [2024-12-14 05:02:24.322077] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:13.695 pt1 00:13:13.695 05:02:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.695 05:02:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 
3 00:13:13.695 05:02:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:13.695 05:02:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:13.695 05:02:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:13.695 05:02:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:13.695 05:02:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:13.695 05:02:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:13.695 05:02:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:13.695 05:02:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:13.695 05:02:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:13.695 05:02:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:13.695 05:02:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:13.695 05:02:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.695 05:02:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.695 05:02:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.695 05:02:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:13.695 "name": "raid_bdev1", 00:13:13.695 "uuid": "0f696552-af95-43de-adf0-41445fd2fe90", 00:13:13.695 "strip_size_kb": 64, 00:13:13.695 "state": "configuring", 00:13:13.695 "raid_level": "raid5f", 00:13:13.695 "superblock": true, 00:13:13.695 "num_base_bdevs": 3, 00:13:13.695 "num_base_bdevs_discovered": 1, 00:13:13.695 
"num_base_bdevs_operational": 3, 00:13:13.695 "base_bdevs_list": [ 00:13:13.695 { 00:13:13.695 "name": "pt1", 00:13:13.695 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:13.695 "is_configured": true, 00:13:13.695 "data_offset": 2048, 00:13:13.695 "data_size": 63488 00:13:13.695 }, 00:13:13.695 { 00:13:13.695 "name": null, 00:13:13.695 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:13.695 "is_configured": false, 00:13:13.695 "data_offset": 2048, 00:13:13.695 "data_size": 63488 00:13:13.695 }, 00:13:13.695 { 00:13:13.695 "name": null, 00:13:13.695 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:13.695 "is_configured": false, 00:13:13.695 "data_offset": 2048, 00:13:13.695 "data_size": 63488 00:13:13.695 } 00:13:13.695 ] 00:13:13.695 }' 00:13:13.695 05:02:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:13.695 05:02:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.954 05:02:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:13:13.954 05:02:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:13.954 05:02:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.954 05:02:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.954 [2024-12-14 05:02:24.778910] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:13.954 [2024-12-14 05:02:24.778997] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:13.954 [2024-12-14 05:02:24.779030] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:13:13.954 [2024-12-14 05:02:24.779060] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:13.954 [2024-12-14 05:02:24.779436] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:13.954 [2024-12-14 05:02:24.779498] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:13.954 [2024-12-14 05:02:24.779590] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:13.954 [2024-12-14 05:02:24.779639] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:13.954 pt2 00:13:13.954 05:02:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.954 05:02:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:13:13.954 05:02:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.954 05:02:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.954 [2024-12-14 05:02:24.790896] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:13:13.954 05:02:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.954 05:02:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:13:13.954 05:02:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:13.954 05:02:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:13.954 05:02:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:13.954 05:02:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:13.954 05:02:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:13.954 05:02:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:13.954 05:02:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:13:13.954 05:02:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:13.954 05:02:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:13.954 05:02:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:13.954 05:02:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.954 05:02:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:13.954 05:02:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.954 05:02:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.213 05:02:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:14.213 "name": "raid_bdev1", 00:13:14.213 "uuid": "0f696552-af95-43de-adf0-41445fd2fe90", 00:13:14.213 "strip_size_kb": 64, 00:13:14.213 "state": "configuring", 00:13:14.213 "raid_level": "raid5f", 00:13:14.213 "superblock": true, 00:13:14.213 "num_base_bdevs": 3, 00:13:14.213 "num_base_bdevs_discovered": 1, 00:13:14.213 "num_base_bdevs_operational": 3, 00:13:14.213 "base_bdevs_list": [ 00:13:14.213 { 00:13:14.213 "name": "pt1", 00:13:14.213 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:14.213 "is_configured": true, 00:13:14.213 "data_offset": 2048, 00:13:14.213 "data_size": 63488 00:13:14.213 }, 00:13:14.213 { 00:13:14.213 "name": null, 00:13:14.213 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:14.213 "is_configured": false, 00:13:14.213 "data_offset": 0, 00:13:14.213 "data_size": 63488 00:13:14.213 }, 00:13:14.213 { 00:13:14.213 "name": null, 00:13:14.213 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:14.213 "is_configured": false, 00:13:14.213 "data_offset": 2048, 00:13:14.213 "data_size": 63488 00:13:14.213 } 00:13:14.213 ] 00:13:14.213 }' 00:13:14.213 05:02:24 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:14.213 05:02:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.471 05:02:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:13:14.471 05:02:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:14.471 05:02:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:14.471 05:02:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.471 05:02:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.471 [2024-12-14 05:02:25.301988] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:14.471 [2024-12-14 05:02:25.302087] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:14.471 [2024-12-14 05:02:25.302118] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:13:14.471 [2024-12-14 05:02:25.302144] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:14.471 [2024-12-14 05:02:25.302492] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:14.471 [2024-12-14 05:02:25.302515] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:14.471 [2024-12-14 05:02:25.302571] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:14.471 [2024-12-14 05:02:25.302588] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:14.471 pt2 00:13:14.471 05:02:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.471 05:02:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:13:14.472 05:02:25 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:14.472 05:02:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:14.472 05:02:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.472 05:02:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.472 [2024-12-14 05:02:25.313961] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:14.472 [2024-12-14 05:02:25.314051] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:14.472 [2024-12-14 05:02:25.314083] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:13:14.472 [2024-12-14 05:02:25.314109] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:14.472 [2024-12-14 05:02:25.314430] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:14.472 [2024-12-14 05:02:25.314485] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:14.472 [2024-12-14 05:02:25.314560] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:13:14.472 [2024-12-14 05:02:25.314603] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:14.472 [2024-12-14 05:02:25.314713] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:13:14.472 [2024-12-14 05:02:25.314756] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:13:14.472 [2024-12-14 05:02:25.314979] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:13:14.472 [2024-12-14 05:02:25.315437] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:13:14.472 [2024-12-14 05:02:25.315491] 
bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:13:14.472 [2024-12-14 05:02:25.315623] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:14.472 pt3 00:13:14.472 05:02:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.472 05:02:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:13:14.472 05:02:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:14.472 05:02:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:13:14.472 05:02:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:14.472 05:02:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:14.472 05:02:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:14.472 05:02:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:14.472 05:02:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:14.472 05:02:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:14.472 05:02:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:14.472 05:02:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:14.472 05:02:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:14.472 05:02:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:14.472 05:02:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:14.472 05:02:25 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.472 05:02:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.472 05:02:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.730 05:02:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:14.730 "name": "raid_bdev1", 00:13:14.730 "uuid": "0f696552-af95-43de-adf0-41445fd2fe90", 00:13:14.730 "strip_size_kb": 64, 00:13:14.730 "state": "online", 00:13:14.730 "raid_level": "raid5f", 00:13:14.730 "superblock": true, 00:13:14.730 "num_base_bdevs": 3, 00:13:14.731 "num_base_bdevs_discovered": 3, 00:13:14.731 "num_base_bdevs_operational": 3, 00:13:14.731 "base_bdevs_list": [ 00:13:14.731 { 00:13:14.731 "name": "pt1", 00:13:14.731 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:14.731 "is_configured": true, 00:13:14.731 "data_offset": 2048, 00:13:14.731 "data_size": 63488 00:13:14.731 }, 00:13:14.731 { 00:13:14.731 "name": "pt2", 00:13:14.731 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:14.731 "is_configured": true, 00:13:14.731 "data_offset": 2048, 00:13:14.731 "data_size": 63488 00:13:14.731 }, 00:13:14.731 { 00:13:14.731 "name": "pt3", 00:13:14.731 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:14.731 "is_configured": true, 00:13:14.731 "data_offset": 2048, 00:13:14.731 "data_size": 63488 00:13:14.731 } 00:13:14.731 ] 00:13:14.731 }' 00:13:14.731 05:02:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:14.731 05:02:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.990 05:02:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:13:14.990 05:02:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:13:14.990 05:02:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
00:13:14.990 05:02:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:14.990 05:02:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:14.990 05:02:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:14.990 05:02:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:14.990 05:02:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:14.990 05:02:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.990 05:02:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.990 [2024-12-14 05:02:25.785356] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:14.990 05:02:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.990 05:02:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:14.990 "name": "raid_bdev1", 00:13:14.990 "aliases": [ 00:13:14.990 "0f696552-af95-43de-adf0-41445fd2fe90" 00:13:14.990 ], 00:13:14.990 "product_name": "Raid Volume", 00:13:14.990 "block_size": 512, 00:13:14.990 "num_blocks": 126976, 00:13:14.990 "uuid": "0f696552-af95-43de-adf0-41445fd2fe90", 00:13:14.990 "assigned_rate_limits": { 00:13:14.990 "rw_ios_per_sec": 0, 00:13:14.990 "rw_mbytes_per_sec": 0, 00:13:14.990 "r_mbytes_per_sec": 0, 00:13:14.990 "w_mbytes_per_sec": 0 00:13:14.990 }, 00:13:14.990 "claimed": false, 00:13:14.990 "zoned": false, 00:13:14.990 "supported_io_types": { 00:13:14.990 "read": true, 00:13:14.990 "write": true, 00:13:14.990 "unmap": false, 00:13:14.990 "flush": false, 00:13:14.990 "reset": true, 00:13:14.990 "nvme_admin": false, 00:13:14.990 "nvme_io": false, 00:13:14.990 "nvme_io_md": false, 00:13:14.990 "write_zeroes": true, 00:13:14.990 "zcopy": false, 00:13:14.990 
"get_zone_info": false, 00:13:14.990 "zone_management": false, 00:13:14.990 "zone_append": false, 00:13:14.990 "compare": false, 00:13:14.990 "compare_and_write": false, 00:13:14.990 "abort": false, 00:13:14.990 "seek_hole": false, 00:13:14.990 "seek_data": false, 00:13:14.990 "copy": false, 00:13:14.990 "nvme_iov_md": false 00:13:14.990 }, 00:13:14.990 "driver_specific": { 00:13:14.990 "raid": { 00:13:14.990 "uuid": "0f696552-af95-43de-adf0-41445fd2fe90", 00:13:14.990 "strip_size_kb": 64, 00:13:14.990 "state": "online", 00:13:14.990 "raid_level": "raid5f", 00:13:14.990 "superblock": true, 00:13:14.990 "num_base_bdevs": 3, 00:13:14.990 "num_base_bdevs_discovered": 3, 00:13:14.990 "num_base_bdevs_operational": 3, 00:13:14.990 "base_bdevs_list": [ 00:13:14.990 { 00:13:14.990 "name": "pt1", 00:13:14.990 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:14.990 "is_configured": true, 00:13:14.990 "data_offset": 2048, 00:13:14.990 "data_size": 63488 00:13:14.990 }, 00:13:14.990 { 00:13:14.990 "name": "pt2", 00:13:14.990 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:14.990 "is_configured": true, 00:13:14.990 "data_offset": 2048, 00:13:14.990 "data_size": 63488 00:13:14.990 }, 00:13:14.990 { 00:13:14.990 "name": "pt3", 00:13:14.990 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:14.990 "is_configured": true, 00:13:14.990 "data_offset": 2048, 00:13:14.990 "data_size": 63488 00:13:14.990 } 00:13:14.990 ] 00:13:14.990 } 00:13:14.990 } 00:13:14.990 }' 00:13:14.990 05:02:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:15.249 05:02:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:13:15.249 pt2 00:13:15.249 pt3' 00:13:15.249 05:02:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:15.249 05:02:25 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:15.249 05:02:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:15.249 05:02:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:15.249 05:02:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:13:15.249 05:02:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.249 05:02:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.249 05:02:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.249 05:02:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:15.249 05:02:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:15.249 05:02:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:15.249 05:02:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:15.249 05:02:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:13:15.249 05:02:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.249 05:02:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.249 05:02:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.249 05:02:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:15.249 05:02:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:15.249 05:02:26 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:15.249 05:02:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:13:15.249 05:02:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.249 05:02:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.249 05:02:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:15.249 05:02:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.249 05:02:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:15.249 05:02:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:15.249 05:02:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:15.249 05:02:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:13:15.249 05:02:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.249 05:02:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.249 [2024-12-14 05:02:26.088799] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:15.249 05:02:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.249 05:02:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 0f696552-af95-43de-adf0-41445fd2fe90 '!=' 0f696552-af95-43de-adf0-41445fd2fe90 ']' 00:13:15.249 05:02:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:13:15.249 05:02:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:15.249 05:02:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 
00:13:15.249 05:02:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:13:15.249 05:02:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.249 05:02:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.508 [2024-12-14 05:02:26.132601] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:13:15.508 05:02:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.508 05:02:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:13:15.508 05:02:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:15.508 05:02:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:15.508 05:02:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:15.508 05:02:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:15.508 05:02:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:15.508 05:02:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:15.508 05:02:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:15.508 05:02:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:15.508 05:02:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:15.508 05:02:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:15.508 05:02:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:15.508 05:02:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:13:15.508 05:02:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.508 05:02:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.508 05:02:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:15.508 "name": "raid_bdev1", 00:13:15.508 "uuid": "0f696552-af95-43de-adf0-41445fd2fe90", 00:13:15.508 "strip_size_kb": 64, 00:13:15.508 "state": "online", 00:13:15.508 "raid_level": "raid5f", 00:13:15.508 "superblock": true, 00:13:15.508 "num_base_bdevs": 3, 00:13:15.508 "num_base_bdevs_discovered": 2, 00:13:15.508 "num_base_bdevs_operational": 2, 00:13:15.508 "base_bdevs_list": [ 00:13:15.508 { 00:13:15.508 "name": null, 00:13:15.508 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:15.508 "is_configured": false, 00:13:15.508 "data_offset": 0, 00:13:15.508 "data_size": 63488 00:13:15.508 }, 00:13:15.508 { 00:13:15.508 "name": "pt2", 00:13:15.508 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:15.508 "is_configured": true, 00:13:15.508 "data_offset": 2048, 00:13:15.508 "data_size": 63488 00:13:15.508 }, 00:13:15.508 { 00:13:15.508 "name": "pt3", 00:13:15.508 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:15.508 "is_configured": true, 00:13:15.508 "data_offset": 2048, 00:13:15.508 "data_size": 63488 00:13:15.508 } 00:13:15.508 ] 00:13:15.508 }' 00:13:15.508 05:02:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:15.508 05:02:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.767 05:02:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:15.767 05:02:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.767 05:02:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.767 [2024-12-14 05:02:26.571817] 
bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:15.767 [2024-12-14 05:02:26.571881] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:15.767 [2024-12-14 05:02:26.571968] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:15.767 [2024-12-14 05:02:26.572033] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:15.767 [2024-12-14 05:02:26.572076] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:13:15.767 05:02:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.767 05:02:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:15.767 05:02:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.767 05:02:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.767 05:02:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:13:15.767 05:02:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.767 05:02:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:13:15.767 05:02:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:13:15.767 05:02:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:13:15.767 05:02:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:13:15.767 05:02:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:13:15.767 05:02:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.767 05:02:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:13:15.767 05:02:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.767 05:02:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:13:15.767 05:02:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:13:15.767 05:02:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:13:15.767 05:02:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.767 05:02:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.027 05:02:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.027 05:02:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:13:16.027 05:02:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:13:16.027 05:02:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:13:16.027 05:02:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:13:16.027 05:02:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:16.027 05:02:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.027 05:02:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.027 [2024-12-14 05:02:26.659671] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:16.027 [2024-12-14 05:02:26.659751] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:16.027 [2024-12-14 05:02:26.659784] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:13:16.027 [2024-12-14 05:02:26.659815] vbdev_passthru.c: 696:vbdev_passthru_register: 
*NOTICE*: bdev claimed 00:13:16.027 [2024-12-14 05:02:26.661822] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:16.027 [2024-12-14 05:02:26.661907] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:16.027 [2024-12-14 05:02:26.661989] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:16.027 [2024-12-14 05:02:26.662038] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:16.027 pt2 00:13:16.027 05:02:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.027 05:02:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:13:16.027 05:02:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:16.027 05:02:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:16.027 05:02:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:16.027 05:02:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:16.027 05:02:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:16.027 05:02:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:16.027 05:02:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:16.027 05:02:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:16.027 05:02:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:16.027 05:02:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:16.027 05:02:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:13:16.027 05:02:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.027 05:02:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.027 05:02:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.027 05:02:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:16.027 "name": "raid_bdev1", 00:13:16.027 "uuid": "0f696552-af95-43de-adf0-41445fd2fe90", 00:13:16.027 "strip_size_kb": 64, 00:13:16.027 "state": "configuring", 00:13:16.027 "raid_level": "raid5f", 00:13:16.027 "superblock": true, 00:13:16.027 "num_base_bdevs": 3, 00:13:16.027 "num_base_bdevs_discovered": 1, 00:13:16.027 "num_base_bdevs_operational": 2, 00:13:16.027 "base_bdevs_list": [ 00:13:16.027 { 00:13:16.027 "name": null, 00:13:16.027 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:16.027 "is_configured": false, 00:13:16.027 "data_offset": 2048, 00:13:16.027 "data_size": 63488 00:13:16.027 }, 00:13:16.027 { 00:13:16.027 "name": "pt2", 00:13:16.027 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:16.027 "is_configured": true, 00:13:16.027 "data_offset": 2048, 00:13:16.027 "data_size": 63488 00:13:16.027 }, 00:13:16.027 { 00:13:16.027 "name": null, 00:13:16.027 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:16.027 "is_configured": false, 00:13:16.027 "data_offset": 2048, 00:13:16.027 "data_size": 63488 00:13:16.027 } 00:13:16.027 ] 00:13:16.027 }' 00:13:16.027 05:02:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:16.027 05:02:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.287 05:02:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:13:16.287 05:02:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:13:16.287 05:02:27 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@519 -- # i=2 00:13:16.287 05:02:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:16.287 05:02:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.287 05:02:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.287 [2024-12-14 05:02:27.082990] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:16.287 [2024-12-14 05:02:27.083074] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:16.287 [2024-12-14 05:02:27.083108] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:13:16.287 [2024-12-14 05:02:27.083134] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:16.287 [2024-12-14 05:02:27.083505] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:16.287 [2024-12-14 05:02:27.083562] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:16.287 [2024-12-14 05:02:27.083642] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:13:16.287 [2024-12-14 05:02:27.083697] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:16.287 [2024-12-14 05:02:27.083816] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:13:16.287 [2024-12-14 05:02:27.083853] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:13:16.287 [2024-12-14 05:02:27.084091] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:13:16.287 [2024-12-14 05:02:27.084578] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:13:16.287 [2024-12-14 05:02:27.084636] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created 
with name raid_bdev1, raid_bdev 0x617000006d00 00:13:16.287 [2024-12-14 05:02:27.084859] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:16.287 pt3 00:13:16.287 05:02:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.287 05:02:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:13:16.287 05:02:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:16.287 05:02:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:16.287 05:02:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:16.287 05:02:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:16.287 05:02:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:16.287 05:02:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:16.287 05:02:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:16.287 05:02:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:16.287 05:02:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:16.287 05:02:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:16.287 05:02:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:16.287 05:02:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.287 05:02:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.287 05:02:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.287 05:02:27 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:16.287 "name": "raid_bdev1", 00:13:16.287 "uuid": "0f696552-af95-43de-adf0-41445fd2fe90", 00:13:16.287 "strip_size_kb": 64, 00:13:16.287 "state": "online", 00:13:16.287 "raid_level": "raid5f", 00:13:16.287 "superblock": true, 00:13:16.287 "num_base_bdevs": 3, 00:13:16.287 "num_base_bdevs_discovered": 2, 00:13:16.287 "num_base_bdevs_operational": 2, 00:13:16.287 "base_bdevs_list": [ 00:13:16.287 { 00:13:16.287 "name": null, 00:13:16.287 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:16.287 "is_configured": false, 00:13:16.287 "data_offset": 2048, 00:13:16.287 "data_size": 63488 00:13:16.287 }, 00:13:16.287 { 00:13:16.287 "name": "pt2", 00:13:16.287 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:16.287 "is_configured": true, 00:13:16.287 "data_offset": 2048, 00:13:16.287 "data_size": 63488 00:13:16.287 }, 00:13:16.287 { 00:13:16.287 "name": "pt3", 00:13:16.287 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:16.287 "is_configured": true, 00:13:16.287 "data_offset": 2048, 00:13:16.287 "data_size": 63488 00:13:16.287 } 00:13:16.287 ] 00:13:16.287 }' 00:13:16.287 05:02:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:16.287 05:02:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.857 05:02:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:16.857 05:02:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.857 05:02:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.857 [2024-12-14 05:02:27.518229] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:16.857 [2024-12-14 05:02:27.518294] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:16.857 [2024-12-14 05:02:27.518388] 
bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:16.857 [2024-12-14 05:02:27.518452] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:16.857 [2024-12-14 05:02:27.518486] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:13:16.857 05:02:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.857 05:02:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:16.857 05:02:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:13:16.857 05:02:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.857 05:02:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.857 05:02:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.857 05:02:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:13:16.857 05:02:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:13:16.857 05:02:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:13:16.857 05:02:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:13:16.857 05:02:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:13:16.857 05:02:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.857 05:02:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.857 05:02:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.857 05:02:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 
00:13:16.857 05:02:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.857 05:02:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.857 [2024-12-14 05:02:27.590096] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:16.857 [2024-12-14 05:02:27.590210] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:16.857 [2024-12-14 05:02:27.590242] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:13:16.857 [2024-12-14 05:02:27.590285] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:16.857 [2024-12-14 05:02:27.592484] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:16.857 [2024-12-14 05:02:27.592574] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:16.857 [2024-12-14 05:02:27.592682] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:13:16.857 [2024-12-14 05:02:27.592741] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:16.857 [2024-12-14 05:02:27.592852] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:13:16.857 [2024-12-14 05:02:27.592921] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:16.857 [2024-12-14 05:02:27.592961] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state configuring 00:13:16.857 pt1 00:13:16.857 [2024-12-14 05:02:27.593040] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:16.857 05:02:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.857 05:02:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:13:16.857 05:02:27 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:13:16.857 05:02:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:16.857 05:02:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:16.857 05:02:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:16.857 05:02:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:16.857 05:02:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:16.858 05:02:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:16.858 05:02:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:16.858 05:02:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:16.858 05:02:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:16.858 05:02:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:16.858 05:02:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.858 05:02:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.858 05:02:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:16.858 05:02:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.858 05:02:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:16.858 "name": "raid_bdev1", 00:13:16.858 "uuid": "0f696552-af95-43de-adf0-41445fd2fe90", 00:13:16.858 "strip_size_kb": 64, 00:13:16.858 "state": "configuring", 00:13:16.858 "raid_level": "raid5f", 00:13:16.858 
"superblock": true, 00:13:16.858 "num_base_bdevs": 3, 00:13:16.858 "num_base_bdevs_discovered": 1, 00:13:16.858 "num_base_bdevs_operational": 2, 00:13:16.858 "base_bdevs_list": [ 00:13:16.858 { 00:13:16.858 "name": null, 00:13:16.858 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:16.858 "is_configured": false, 00:13:16.858 "data_offset": 2048, 00:13:16.858 "data_size": 63488 00:13:16.858 }, 00:13:16.858 { 00:13:16.858 "name": "pt2", 00:13:16.858 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:16.858 "is_configured": true, 00:13:16.858 "data_offset": 2048, 00:13:16.858 "data_size": 63488 00:13:16.858 }, 00:13:16.858 { 00:13:16.858 "name": null, 00:13:16.858 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:16.858 "is_configured": false, 00:13:16.858 "data_offset": 2048, 00:13:16.858 "data_size": 63488 00:13:16.858 } 00:13:16.858 ] 00:13:16.858 }' 00:13:16.858 05:02:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:16.858 05:02:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.427 05:02:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:13:17.427 05:02:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:13:17.427 05:02:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.427 05:02:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.427 05:02:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.427 05:02:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:13:17.427 05:02:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:17.427 05:02:28 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.427 05:02:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.427 [2024-12-14 05:02:28.033334] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:17.427 [2024-12-14 05:02:28.033440] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:17.427 [2024-12-14 05:02:28.033472] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:13:17.427 [2024-12-14 05:02:28.033501] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:17.427 [2024-12-14 05:02:28.033869] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:17.427 [2024-12-14 05:02:28.033934] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:17.427 [2024-12-14 05:02:28.034030] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:13:17.427 [2024-12-14 05:02:28.034082] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:17.427 [2024-12-14 05:02:28.034196] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:13:17.427 [2024-12-14 05:02:28.034241] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:13:17.427 [2024-12-14 05:02:28.034474] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:13:17.427 [2024-12-14 05:02:28.034966] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:13:17.427 [2024-12-14 05:02:28.035015] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:13:17.427 [2024-12-14 05:02:28.035224] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:17.427 pt3 00:13:17.427 05:02:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:13:17.427 05:02:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:13:17.427 05:02:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:17.427 05:02:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:17.427 05:02:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:17.427 05:02:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:17.427 05:02:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:17.427 05:02:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:17.427 05:02:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:17.427 05:02:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:17.427 05:02:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:17.427 05:02:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:17.427 05:02:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:17.427 05:02:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.427 05:02:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.427 05:02:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.427 05:02:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:17.427 "name": "raid_bdev1", 00:13:17.427 "uuid": "0f696552-af95-43de-adf0-41445fd2fe90", 00:13:17.427 "strip_size_kb": 64, 00:13:17.427 "state": "online", 00:13:17.427 "raid_level": 
"raid5f", 00:13:17.427 "superblock": true, 00:13:17.427 "num_base_bdevs": 3, 00:13:17.427 "num_base_bdevs_discovered": 2, 00:13:17.427 "num_base_bdevs_operational": 2, 00:13:17.427 "base_bdevs_list": [ 00:13:17.427 { 00:13:17.427 "name": null, 00:13:17.427 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:17.427 "is_configured": false, 00:13:17.427 "data_offset": 2048, 00:13:17.427 "data_size": 63488 00:13:17.427 }, 00:13:17.427 { 00:13:17.427 "name": "pt2", 00:13:17.427 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:17.427 "is_configured": true, 00:13:17.427 "data_offset": 2048, 00:13:17.427 "data_size": 63488 00:13:17.427 }, 00:13:17.427 { 00:13:17.427 "name": "pt3", 00:13:17.427 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:17.427 "is_configured": true, 00:13:17.427 "data_offset": 2048, 00:13:17.427 "data_size": 63488 00:13:17.427 } 00:13:17.427 ] 00:13:17.427 }' 00:13:17.427 05:02:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:17.427 05:02:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.687 05:02:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:13:17.687 05:02:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:13:17.687 05:02:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.687 05:02:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.687 05:02:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.687 05:02:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:13:17.687 05:02:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:13:17.687 05:02:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 
00:13:17.687 05:02:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.687 05:02:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.687 [2024-12-14 05:02:28.544641] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:17.687 05:02:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.687 05:02:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 0f696552-af95-43de-adf0-41445fd2fe90 '!=' 0f696552-af95-43de-adf0-41445fd2fe90 ']' 00:13:17.687 05:02:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 91669 00:13:17.687 05:02:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 91669 ']' 00:13:17.687 05:02:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # kill -0 91669 00:13:17.947 05:02:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@955 -- # uname 00:13:17.947 05:02:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:17.947 05:02:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 91669 00:13:17.947 05:02:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:17.947 05:02:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:17.947 05:02:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 91669' 00:13:17.947 killing process with pid 91669 00:13:17.947 05:02:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@969 -- # kill 91669 00:13:17.947 [2024-12-14 05:02:28.611357] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:17.947 [2024-12-14 05:02:28.611428] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 
00:13:17.947 [2024-12-14 05:02:28.611481] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:17.947 [2024-12-14 05:02:28.611490] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:13:17.947 05:02:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@974 -- # wait 91669 00:13:17.947 [2024-12-14 05:02:28.644846] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:18.207 05:02:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:13:18.207 00:13:18.207 real 0m6.613s 00:13:18.207 user 0m10.999s 00:13:18.207 sys 0m1.502s 00:13:18.207 ************************************ 00:13:18.207 END TEST raid5f_superblock_test 00:13:18.207 ************************************ 00:13:18.207 05:02:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:18.207 05:02:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.207 05:02:28 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:13:18.207 05:02:28 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 3 false false true 00:13:18.207 05:02:28 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:13:18.207 05:02:28 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:18.207 05:02:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:18.207 ************************************ 00:13:18.207 START TEST raid5f_rebuild_test 00:13:18.207 ************************************ 00:13:18.207 05:02:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid5f 3 false false true 00:13:18.207 05:02:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:13:18.207 05:02:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local 
num_base_bdevs=3 00:13:18.207 05:02:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:13:18.207 05:02:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:13:18.207 05:02:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:18.207 05:02:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:18.207 05:02:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:18.207 05:02:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:18.207 05:02:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:18.207 05:02:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:18.207 05:02:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:18.207 05:02:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:18.207 05:02:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:18.207 05:02:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:13:18.207 05:02:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:18.207 05:02:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:18.207 05:02:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:13:18.207 05:02:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:18.207 05:02:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:18.207 05:02:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:18.207 05:02:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:18.208 05:02:28 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:18.208 05:02:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:18.208 05:02:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:13:18.208 05:02:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:13:18.208 05:02:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:13:18.208 05:02:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:13:18.208 05:02:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:13:18.208 05:02:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=92096 00:13:18.208 05:02:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 92096 00:13:18.208 05:02:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:18.208 05:02:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@831 -- # '[' -z 92096 ']' 00:13:18.208 05:02:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:18.208 05:02:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:18.208 05:02:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:18.208 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:18.208 05:02:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:18.208 05:02:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.208 I/O size of 3145728 is greater than zero copy threshold (65536). 
00:13:18.208 Zero copy mechanism will not be used. 00:13:18.208 [2024-12-14 05:02:29.074902] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:13:18.208 [2024-12-14 05:02:29.075028] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92096 ] 00:13:18.468 [2024-12-14 05:02:29.234016] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:18.468 [2024-12-14 05:02:29.282297] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:18.468 [2024-12-14 05:02:29.325221] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:18.468 [2024-12-14 05:02:29.325257] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:19.039 05:02:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:19.039 05:02:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # return 0 00:13:19.039 05:02:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:19.039 05:02:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:19.039 05:02:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.039 05:02:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.039 BaseBdev1_malloc 00:13:19.039 05:02:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.039 05:02:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:19.039 05:02:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.039 05:02:29 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.039 [2024-12-14 05:02:29.908158] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:19.039 [2024-12-14 05:02:29.908270] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:19.040 [2024-12-14 05:02:29.908333] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:19.040 [2024-12-14 05:02:29.908409] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:19.040 [2024-12-14 05:02:29.910659] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:19.040 [2024-12-14 05:02:29.910744] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:19.040 BaseBdev1 00:13:19.040 05:02:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.040 05:02:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:19.040 05:02:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:19.040 05:02:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.040 05:02:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.309 BaseBdev2_malloc 00:13:19.309 05:02:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.309 05:02:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:19.309 05:02:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.309 05:02:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.309 [2024-12-14 05:02:29.946099] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 
00:13:19.309 [2024-12-14 05:02:29.946201] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:19.309 [2024-12-14 05:02:29.946241] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:19.309 [2024-12-14 05:02:29.946268] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:19.309 [2024-12-14 05:02:29.948410] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:19.309 [2024-12-14 05:02:29.948479] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:19.309 BaseBdev2 00:13:19.309 05:02:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.309 05:02:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:19.309 05:02:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:19.309 05:02:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.309 05:02:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.309 BaseBdev3_malloc 00:13:19.309 05:02:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.309 05:02:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:13:19.309 05:02:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.309 05:02:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.309 [2024-12-14 05:02:29.974893] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:13:19.309 [2024-12-14 05:02:29.974988] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:19.309 [2024-12-14 05:02:29.975030] vbdev_passthru.c: 681:vbdev_passthru_register: 
*NOTICE*: io_device created at: 0x0x616000008a80 00:13:19.309 [2024-12-14 05:02:29.975057] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:19.309 [2024-12-14 05:02:29.977256] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:19.309 [2024-12-14 05:02:29.977324] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:19.309 BaseBdev3 00:13:19.309 05:02:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.309 05:02:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:19.309 05:02:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.309 05:02:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.309 spare_malloc 00:13:19.309 05:02:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.309 05:02:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:19.309 05:02:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.309 05:02:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.309 spare_delay 00:13:19.309 05:02:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.309 05:02:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:19.309 05:02:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.309 05:02:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.309 [2024-12-14 05:02:30.015664] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:19.309 [2024-12-14 05:02:30.015748] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:19.309 [2024-12-14 05:02:30.015787] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:13:19.309 [2024-12-14 05:02:30.015815] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:19.309 [2024-12-14 05:02:30.017806] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:19.309 [2024-12-14 05:02:30.017874] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:19.309 spare 00:13:19.309 05:02:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.309 05:02:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:13:19.309 05:02:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.309 05:02:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.309 [2024-12-14 05:02:30.027702] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:19.309 [2024-12-14 05:02:30.029483] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:19.309 [2024-12-14 05:02:30.029584] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:19.309 [2024-12-14 05:02:30.029695] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:13:19.309 [2024-12-14 05:02:30.029738] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:13:19.309 [2024-12-14 05:02:30.029997] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:13:19.309 [2024-12-14 05:02:30.030403] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:13:19.309 [2024-12-14 05:02:30.030415] 
bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:13:19.309 [2024-12-14 05:02:30.030549] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:19.309 05:02:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.309 05:02:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:13:19.309 05:02:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:19.309 05:02:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:19.309 05:02:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:19.309 05:02:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:19.309 05:02:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:19.309 05:02:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:19.309 05:02:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:19.309 05:02:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:19.309 05:02:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:19.309 05:02:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:19.309 05:02:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.310 05:02:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:19.310 05:02:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.310 05:02:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.310 05:02:30 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:19.310 "name": "raid_bdev1", 00:13:19.310 "uuid": "6deeeccd-5a20-4e0d-bdf9-318a3db9526e", 00:13:19.310 "strip_size_kb": 64, 00:13:19.310 "state": "online", 00:13:19.310 "raid_level": "raid5f", 00:13:19.310 "superblock": false, 00:13:19.310 "num_base_bdevs": 3, 00:13:19.310 "num_base_bdevs_discovered": 3, 00:13:19.310 "num_base_bdevs_operational": 3, 00:13:19.310 "base_bdevs_list": [ 00:13:19.310 { 00:13:19.310 "name": "BaseBdev1", 00:13:19.310 "uuid": "35d7d748-c258-5029-b1b3-7b764f596c57", 00:13:19.310 "is_configured": true, 00:13:19.310 "data_offset": 0, 00:13:19.310 "data_size": 65536 00:13:19.310 }, 00:13:19.310 { 00:13:19.310 "name": "BaseBdev2", 00:13:19.310 "uuid": "18ba6433-0e59-5207-b7f8-24df39ff495c", 00:13:19.310 "is_configured": true, 00:13:19.310 "data_offset": 0, 00:13:19.310 "data_size": 65536 00:13:19.310 }, 00:13:19.310 { 00:13:19.310 "name": "BaseBdev3", 00:13:19.310 "uuid": "05a6fd19-ccb3-5610-92b4-4a7fc6277aaa", 00:13:19.310 "is_configured": true, 00:13:19.310 "data_offset": 0, 00:13:19.310 "data_size": 65536 00:13:19.310 } 00:13:19.310 ] 00:13:19.310 }' 00:13:19.310 05:02:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:19.310 05:02:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.902 05:02:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:19.902 05:02:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:19.902 05:02:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.902 05:02:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.902 [2024-12-14 05:02:30.507366] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:19.902 05:02:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:13:19.902 05:02:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=131072 00:13:19.902 05:02:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:19.902 05:02:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:19.902 05:02:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.902 05:02:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.902 05:02:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.902 05:02:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:13:19.902 05:02:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:13:19.902 05:02:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:13:19.902 05:02:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:13:19.902 05:02:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:13:19.902 05:02:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:19.902 05:02:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:13:19.902 05:02:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:19.902 05:02:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:19.902 05:02:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:19.902 05:02:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:13:19.902 05:02:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:19.902 05:02:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # 
(( i < 1 )) 00:13:19.902 05:02:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:13:19.902 [2024-12-14 05:02:30.782728] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:13:20.162 /dev/nbd0 00:13:20.162 05:02:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:20.162 05:02:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:20.162 05:02:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:13:20.162 05:02:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:13:20.162 05:02:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:20.162 05:02:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:20.162 05:02:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:13:20.162 05:02:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:13:20.162 05:02:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:20.162 05:02:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:20.162 05:02:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:20.162 1+0 records in 00:13:20.162 1+0 records out 00:13:20.162 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000480256 s, 8.5 MB/s 00:13:20.162 05:02:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:20.162 05:02:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:13:20.162 05:02:30 bdev_raid.raid5f_rebuild_test 
-- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:20.162 05:02:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:20.162 05:02:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:13:20.162 05:02:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:20.163 05:02:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:20.163 05:02:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:13:20.163 05:02:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:13:20.163 05:02:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 128 00:13:20.163 05:02:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=512 oflag=direct 00:13:20.422 512+0 records in 00:13:20.422 512+0 records out 00:13:20.422 67108864 bytes (67 MB, 64 MiB) copied, 0.308089 s, 218 MB/s 00:13:20.422 05:02:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:20.422 05:02:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:20.422 05:02:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:20.422 05:02:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:20.422 05:02:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:13:20.422 05:02:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:20.423 05:02:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:20.682 [2024-12-14 05:02:31.387155] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:13:20.682 05:02:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:20.682 05:02:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:20.682 05:02:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:20.682 05:02:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:20.683 05:02:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:20.683 05:02:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:20.683 05:02:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:13:20.683 05:02:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:13:20.683 05:02:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:20.683 05:02:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.683 05:02:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.683 [2024-12-14 05:02:31.415192] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:20.683 05:02:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.683 05:02:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:13:20.683 05:02:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:20.683 05:02:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:20.683 05:02:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:20.683 05:02:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:20.683 05:02:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:13:20.683 05:02:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:20.683 05:02:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:20.683 05:02:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:20.683 05:02:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:20.683 05:02:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:20.683 05:02:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:20.683 05:02:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.683 05:02:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.683 05:02:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.683 05:02:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:20.683 "name": "raid_bdev1", 00:13:20.683 "uuid": "6deeeccd-5a20-4e0d-bdf9-318a3db9526e", 00:13:20.683 "strip_size_kb": 64, 00:13:20.683 "state": "online", 00:13:20.683 "raid_level": "raid5f", 00:13:20.683 "superblock": false, 00:13:20.683 "num_base_bdevs": 3, 00:13:20.683 "num_base_bdevs_discovered": 2, 00:13:20.683 "num_base_bdevs_operational": 2, 00:13:20.683 "base_bdevs_list": [ 00:13:20.683 { 00:13:20.683 "name": null, 00:13:20.683 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:20.683 "is_configured": false, 00:13:20.683 "data_offset": 0, 00:13:20.683 "data_size": 65536 00:13:20.683 }, 00:13:20.683 { 00:13:20.683 "name": "BaseBdev2", 00:13:20.683 "uuid": "18ba6433-0e59-5207-b7f8-24df39ff495c", 00:13:20.683 "is_configured": true, 00:13:20.683 "data_offset": 0, 00:13:20.683 "data_size": 65536 00:13:20.683 }, 00:13:20.683 { 00:13:20.683 "name": "BaseBdev3", 00:13:20.683 "uuid": 
"05a6fd19-ccb3-5610-92b4-4a7fc6277aaa", 00:13:20.683 "is_configured": true, 00:13:20.683 "data_offset": 0, 00:13:20.683 "data_size": 65536 00:13:20.683 } 00:13:20.683 ] 00:13:20.683 }' 00:13:20.683 05:02:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:20.683 05:02:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.252 05:02:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:21.252 05:02:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.252 05:02:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.253 [2024-12-14 05:02:31.874386] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:21.253 [2024-12-14 05:02:31.878314] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b4e0 00:13:21.253 05:02:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.253 05:02:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:21.253 [2024-12-14 05:02:31.880443] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:22.191 05:02:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:22.191 05:02:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:22.191 05:02:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:22.191 05:02:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:22.191 05:02:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:22.191 05:02:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:22.191 05:02:32 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.191 05:02:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:22.191 05:02:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.191 05:02:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.191 05:02:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:22.191 "name": "raid_bdev1", 00:13:22.191 "uuid": "6deeeccd-5a20-4e0d-bdf9-318a3db9526e", 00:13:22.191 "strip_size_kb": 64, 00:13:22.191 "state": "online", 00:13:22.191 "raid_level": "raid5f", 00:13:22.191 "superblock": false, 00:13:22.191 "num_base_bdevs": 3, 00:13:22.191 "num_base_bdevs_discovered": 3, 00:13:22.191 "num_base_bdevs_operational": 3, 00:13:22.191 "process": { 00:13:22.191 "type": "rebuild", 00:13:22.191 "target": "spare", 00:13:22.191 "progress": { 00:13:22.191 "blocks": 20480, 00:13:22.191 "percent": 15 00:13:22.191 } 00:13:22.191 }, 00:13:22.191 "base_bdevs_list": [ 00:13:22.191 { 00:13:22.191 "name": "spare", 00:13:22.191 "uuid": "7fd94674-a1d2-5b93-9505-5e915e4f9f6f", 00:13:22.191 "is_configured": true, 00:13:22.191 "data_offset": 0, 00:13:22.191 "data_size": 65536 00:13:22.191 }, 00:13:22.191 { 00:13:22.191 "name": "BaseBdev2", 00:13:22.191 "uuid": "18ba6433-0e59-5207-b7f8-24df39ff495c", 00:13:22.191 "is_configured": true, 00:13:22.191 "data_offset": 0, 00:13:22.191 "data_size": 65536 00:13:22.191 }, 00:13:22.191 { 00:13:22.191 "name": "BaseBdev3", 00:13:22.191 "uuid": "05a6fd19-ccb3-5610-92b4-4a7fc6277aaa", 00:13:22.191 "is_configured": true, 00:13:22.191 "data_offset": 0, 00:13:22.191 "data_size": 65536 00:13:22.191 } 00:13:22.191 ] 00:13:22.191 }' 00:13:22.191 05:02:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:22.191 05:02:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- 
# [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:22.191 05:02:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:22.191 05:02:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:22.191 05:02:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:22.191 05:02:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.191 05:02:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.191 [2024-12-14 05:02:33.041044] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:22.451 [2024-12-14 05:02:33.087248] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:22.451 [2024-12-14 05:02:33.087393] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:22.451 [2024-12-14 05:02:33.087431] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:22.451 [2024-12-14 05:02:33.087456] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:22.451 05:02:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.451 05:02:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:13:22.451 05:02:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:22.451 05:02:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:22.451 05:02:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:22.451 05:02:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:22.451 05:02:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:13:22.451 05:02:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:22.451 05:02:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:22.451 05:02:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:22.451 05:02:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:22.451 05:02:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:22.451 05:02:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:22.451 05:02:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.451 05:02:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.451 05:02:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.451 05:02:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:22.451 "name": "raid_bdev1", 00:13:22.451 "uuid": "6deeeccd-5a20-4e0d-bdf9-318a3db9526e", 00:13:22.451 "strip_size_kb": 64, 00:13:22.451 "state": "online", 00:13:22.451 "raid_level": "raid5f", 00:13:22.451 "superblock": false, 00:13:22.451 "num_base_bdevs": 3, 00:13:22.451 "num_base_bdevs_discovered": 2, 00:13:22.451 "num_base_bdevs_operational": 2, 00:13:22.451 "base_bdevs_list": [ 00:13:22.451 { 00:13:22.451 "name": null, 00:13:22.451 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:22.451 "is_configured": false, 00:13:22.451 "data_offset": 0, 00:13:22.451 "data_size": 65536 00:13:22.451 }, 00:13:22.451 { 00:13:22.451 "name": "BaseBdev2", 00:13:22.451 "uuid": "18ba6433-0e59-5207-b7f8-24df39ff495c", 00:13:22.451 "is_configured": true, 00:13:22.451 "data_offset": 0, 00:13:22.451 "data_size": 65536 00:13:22.451 }, 00:13:22.451 { 00:13:22.451 "name": "BaseBdev3", 00:13:22.451 "uuid": 
"05a6fd19-ccb3-5610-92b4-4a7fc6277aaa", 00:13:22.451 "is_configured": true, 00:13:22.451 "data_offset": 0, 00:13:22.451 "data_size": 65536 00:13:22.451 } 00:13:22.451 ] 00:13:22.451 }' 00:13:22.451 05:02:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:22.451 05:02:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.711 05:02:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:22.711 05:02:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:22.711 05:02:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:22.711 05:02:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:22.711 05:02:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:22.711 05:02:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:22.711 05:02:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:22.711 05:02:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.711 05:02:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.711 05:02:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.971 05:02:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:22.971 "name": "raid_bdev1", 00:13:22.971 "uuid": "6deeeccd-5a20-4e0d-bdf9-318a3db9526e", 00:13:22.971 "strip_size_kb": 64, 00:13:22.971 "state": "online", 00:13:22.971 "raid_level": "raid5f", 00:13:22.971 "superblock": false, 00:13:22.971 "num_base_bdevs": 3, 00:13:22.971 "num_base_bdevs_discovered": 2, 00:13:22.971 "num_base_bdevs_operational": 2, 00:13:22.971 "base_bdevs_list": [ 00:13:22.971 { 00:13:22.971 
"name": null, 00:13:22.971 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:22.971 "is_configured": false, 00:13:22.971 "data_offset": 0, 00:13:22.971 "data_size": 65536 00:13:22.971 }, 00:13:22.971 { 00:13:22.971 "name": "BaseBdev2", 00:13:22.971 "uuid": "18ba6433-0e59-5207-b7f8-24df39ff495c", 00:13:22.971 "is_configured": true, 00:13:22.971 "data_offset": 0, 00:13:22.971 "data_size": 65536 00:13:22.971 }, 00:13:22.971 { 00:13:22.971 "name": "BaseBdev3", 00:13:22.971 "uuid": "05a6fd19-ccb3-5610-92b4-4a7fc6277aaa", 00:13:22.971 "is_configured": true, 00:13:22.971 "data_offset": 0, 00:13:22.971 "data_size": 65536 00:13:22.971 } 00:13:22.971 ] 00:13:22.971 }' 00:13:22.971 05:02:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:22.971 05:02:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:22.971 05:02:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:22.971 05:02:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:22.971 05:02:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:22.971 05:02:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.971 05:02:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.971 [2024-12-14 05:02:33.711785] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:22.971 [2024-12-14 05:02:33.715271] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b5b0 00:13:22.971 [2024-12-14 05:02:33.717327] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:22.971 05:02:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.971 05:02:33 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@663 -- # sleep 1 00:13:23.921 05:02:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:23.921 05:02:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:23.921 05:02:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:23.921 05:02:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:23.921 05:02:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:23.921 05:02:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:23.921 05:02:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.921 05:02:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.921 05:02:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:23.921 05:02:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.921 05:02:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:23.921 "name": "raid_bdev1", 00:13:23.921 "uuid": "6deeeccd-5a20-4e0d-bdf9-318a3db9526e", 00:13:23.921 "strip_size_kb": 64, 00:13:23.921 "state": "online", 00:13:23.921 "raid_level": "raid5f", 00:13:23.921 "superblock": false, 00:13:23.921 "num_base_bdevs": 3, 00:13:23.921 "num_base_bdevs_discovered": 3, 00:13:23.921 "num_base_bdevs_operational": 3, 00:13:23.921 "process": { 00:13:23.921 "type": "rebuild", 00:13:23.921 "target": "spare", 00:13:23.921 "progress": { 00:13:23.921 "blocks": 20480, 00:13:23.921 "percent": 15 00:13:23.921 } 00:13:23.921 }, 00:13:23.921 "base_bdevs_list": [ 00:13:23.921 { 00:13:23.921 "name": "spare", 00:13:23.921 "uuid": "7fd94674-a1d2-5b93-9505-5e915e4f9f6f", 00:13:23.921 "is_configured": true, 00:13:23.921 
"data_offset": 0, 00:13:23.921 "data_size": 65536 00:13:23.921 }, 00:13:23.921 { 00:13:23.921 "name": "BaseBdev2", 00:13:23.921 "uuid": "18ba6433-0e59-5207-b7f8-24df39ff495c", 00:13:23.921 "is_configured": true, 00:13:23.921 "data_offset": 0, 00:13:23.921 "data_size": 65536 00:13:23.921 }, 00:13:23.921 { 00:13:23.921 "name": "BaseBdev3", 00:13:23.921 "uuid": "05a6fd19-ccb3-5610-92b4-4a7fc6277aaa", 00:13:23.921 "is_configured": true, 00:13:23.921 "data_offset": 0, 00:13:23.921 "data_size": 65536 00:13:23.921 } 00:13:23.921 ] 00:13:23.921 }' 00:13:23.921 05:02:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:24.180 05:02:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:24.180 05:02:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:24.180 05:02:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:24.180 05:02:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:13:24.180 05:02:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:13:24.180 05:02:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:13:24.180 05:02:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=445 00:13:24.180 05:02:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:24.180 05:02:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:24.180 05:02:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:24.180 05:02:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:24.180 05:02:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:24.180 
05:02:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:24.180 05:02:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:24.180 05:02:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.181 05:02:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:24.181 05:02:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.181 05:02:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.181 05:02:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:24.181 "name": "raid_bdev1", 00:13:24.181 "uuid": "6deeeccd-5a20-4e0d-bdf9-318a3db9526e", 00:13:24.181 "strip_size_kb": 64, 00:13:24.181 "state": "online", 00:13:24.181 "raid_level": "raid5f", 00:13:24.181 "superblock": false, 00:13:24.181 "num_base_bdevs": 3, 00:13:24.181 "num_base_bdevs_discovered": 3, 00:13:24.181 "num_base_bdevs_operational": 3, 00:13:24.181 "process": { 00:13:24.181 "type": "rebuild", 00:13:24.181 "target": "spare", 00:13:24.181 "progress": { 00:13:24.181 "blocks": 22528, 00:13:24.181 "percent": 17 00:13:24.181 } 00:13:24.181 }, 00:13:24.181 "base_bdevs_list": [ 00:13:24.181 { 00:13:24.181 "name": "spare", 00:13:24.181 "uuid": "7fd94674-a1d2-5b93-9505-5e915e4f9f6f", 00:13:24.181 "is_configured": true, 00:13:24.181 "data_offset": 0, 00:13:24.181 "data_size": 65536 00:13:24.181 }, 00:13:24.181 { 00:13:24.181 "name": "BaseBdev2", 00:13:24.181 "uuid": "18ba6433-0e59-5207-b7f8-24df39ff495c", 00:13:24.181 "is_configured": true, 00:13:24.181 "data_offset": 0, 00:13:24.181 "data_size": 65536 00:13:24.181 }, 00:13:24.181 { 00:13:24.181 "name": "BaseBdev3", 00:13:24.181 "uuid": "05a6fd19-ccb3-5610-92b4-4a7fc6277aaa", 00:13:24.181 "is_configured": true, 00:13:24.181 "data_offset": 0, 00:13:24.181 "data_size": 65536 00:13:24.181 
} 00:13:24.181 ] 00:13:24.181 }' 00:13:24.181 05:02:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:24.181 05:02:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:24.181 05:02:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:24.181 05:02:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:24.181 05:02:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:25.561 05:02:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:25.561 05:02:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:25.561 05:02:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:25.561 05:02:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:25.561 05:02:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:25.561 05:02:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:25.561 05:02:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:25.561 05:02:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:25.561 05:02:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.561 05:02:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.561 05:02:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.561 05:02:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:25.561 "name": "raid_bdev1", 00:13:25.561 "uuid": "6deeeccd-5a20-4e0d-bdf9-318a3db9526e", 00:13:25.561 
"strip_size_kb": 64, 00:13:25.561 "state": "online", 00:13:25.561 "raid_level": "raid5f", 00:13:25.561 "superblock": false, 00:13:25.561 "num_base_bdevs": 3, 00:13:25.561 "num_base_bdevs_discovered": 3, 00:13:25.561 "num_base_bdevs_operational": 3, 00:13:25.561 "process": { 00:13:25.561 "type": "rebuild", 00:13:25.561 "target": "spare", 00:13:25.561 "progress": { 00:13:25.561 "blocks": 47104, 00:13:25.561 "percent": 35 00:13:25.561 } 00:13:25.561 }, 00:13:25.561 "base_bdevs_list": [ 00:13:25.561 { 00:13:25.561 "name": "spare", 00:13:25.561 "uuid": "7fd94674-a1d2-5b93-9505-5e915e4f9f6f", 00:13:25.561 "is_configured": true, 00:13:25.561 "data_offset": 0, 00:13:25.561 "data_size": 65536 00:13:25.561 }, 00:13:25.561 { 00:13:25.561 "name": "BaseBdev2", 00:13:25.561 "uuid": "18ba6433-0e59-5207-b7f8-24df39ff495c", 00:13:25.561 "is_configured": true, 00:13:25.561 "data_offset": 0, 00:13:25.561 "data_size": 65536 00:13:25.561 }, 00:13:25.561 { 00:13:25.561 "name": "BaseBdev3", 00:13:25.561 "uuid": "05a6fd19-ccb3-5610-92b4-4a7fc6277aaa", 00:13:25.561 "is_configured": true, 00:13:25.561 "data_offset": 0, 00:13:25.561 "data_size": 65536 00:13:25.561 } 00:13:25.561 ] 00:13:25.561 }' 00:13:25.561 05:02:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:25.561 05:02:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:25.561 05:02:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:25.561 05:02:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:25.561 05:02:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:26.500 05:02:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:26.500 05:02:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:26.500 05:02:37 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:26.500 05:02:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:26.500 05:02:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:26.500 05:02:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:26.500 05:02:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:26.500 05:02:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:26.500 05:02:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.500 05:02:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.500 05:02:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.500 05:02:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:26.500 "name": "raid_bdev1", 00:13:26.500 "uuid": "6deeeccd-5a20-4e0d-bdf9-318a3db9526e", 00:13:26.500 "strip_size_kb": 64, 00:13:26.500 "state": "online", 00:13:26.500 "raid_level": "raid5f", 00:13:26.500 "superblock": false, 00:13:26.500 "num_base_bdevs": 3, 00:13:26.500 "num_base_bdevs_discovered": 3, 00:13:26.500 "num_base_bdevs_operational": 3, 00:13:26.500 "process": { 00:13:26.500 "type": "rebuild", 00:13:26.500 "target": "spare", 00:13:26.500 "progress": { 00:13:26.500 "blocks": 69632, 00:13:26.500 "percent": 53 00:13:26.500 } 00:13:26.500 }, 00:13:26.500 "base_bdevs_list": [ 00:13:26.500 { 00:13:26.500 "name": "spare", 00:13:26.500 "uuid": "7fd94674-a1d2-5b93-9505-5e915e4f9f6f", 00:13:26.500 "is_configured": true, 00:13:26.500 "data_offset": 0, 00:13:26.500 "data_size": 65536 00:13:26.500 }, 00:13:26.500 { 00:13:26.500 "name": "BaseBdev2", 00:13:26.500 "uuid": "18ba6433-0e59-5207-b7f8-24df39ff495c", 00:13:26.500 
"is_configured": true, 00:13:26.500 "data_offset": 0, 00:13:26.500 "data_size": 65536 00:13:26.500 }, 00:13:26.500 { 00:13:26.500 "name": "BaseBdev3", 00:13:26.500 "uuid": "05a6fd19-ccb3-5610-92b4-4a7fc6277aaa", 00:13:26.500 "is_configured": true, 00:13:26.500 "data_offset": 0, 00:13:26.500 "data_size": 65536 00:13:26.500 } 00:13:26.500 ] 00:13:26.500 }' 00:13:26.500 05:02:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:26.500 05:02:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:26.500 05:02:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:26.500 05:02:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:26.500 05:02:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:27.439 05:02:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:27.439 05:02:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:27.439 05:02:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:27.439 05:02:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:27.439 05:02:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:27.439 05:02:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:27.439 05:02:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:27.439 05:02:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:27.439 05:02:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.439 05:02:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # 
set +x 00:13:27.699 05:02:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.699 05:02:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:27.699 "name": "raid_bdev1", 00:13:27.699 "uuid": "6deeeccd-5a20-4e0d-bdf9-318a3db9526e", 00:13:27.699 "strip_size_kb": 64, 00:13:27.699 "state": "online", 00:13:27.699 "raid_level": "raid5f", 00:13:27.699 "superblock": false, 00:13:27.699 "num_base_bdevs": 3, 00:13:27.699 "num_base_bdevs_discovered": 3, 00:13:27.699 "num_base_bdevs_operational": 3, 00:13:27.699 "process": { 00:13:27.699 "type": "rebuild", 00:13:27.699 "target": "spare", 00:13:27.699 "progress": { 00:13:27.699 "blocks": 92160, 00:13:27.699 "percent": 70 00:13:27.699 } 00:13:27.699 }, 00:13:27.699 "base_bdevs_list": [ 00:13:27.699 { 00:13:27.699 "name": "spare", 00:13:27.699 "uuid": "7fd94674-a1d2-5b93-9505-5e915e4f9f6f", 00:13:27.699 "is_configured": true, 00:13:27.699 "data_offset": 0, 00:13:27.699 "data_size": 65536 00:13:27.699 }, 00:13:27.699 { 00:13:27.699 "name": "BaseBdev2", 00:13:27.699 "uuid": "18ba6433-0e59-5207-b7f8-24df39ff495c", 00:13:27.699 "is_configured": true, 00:13:27.699 "data_offset": 0, 00:13:27.699 "data_size": 65536 00:13:27.699 }, 00:13:27.699 { 00:13:27.699 "name": "BaseBdev3", 00:13:27.699 "uuid": "05a6fd19-ccb3-5610-92b4-4a7fc6277aaa", 00:13:27.699 "is_configured": true, 00:13:27.699 "data_offset": 0, 00:13:27.699 "data_size": 65536 00:13:27.699 } 00:13:27.699 ] 00:13:27.699 }' 00:13:27.699 05:02:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:27.699 05:02:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:27.699 05:02:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:27.699 05:02:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:27.699 05:02:38 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:28.638 05:02:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:28.638 05:02:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:28.638 05:02:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:28.638 05:02:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:28.638 05:02:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:28.638 05:02:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:28.638 05:02:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:28.638 05:02:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.638 05:02:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:28.638 05:02:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.638 05:02:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.638 05:02:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:28.638 "name": "raid_bdev1", 00:13:28.638 "uuid": "6deeeccd-5a20-4e0d-bdf9-318a3db9526e", 00:13:28.638 "strip_size_kb": 64, 00:13:28.638 "state": "online", 00:13:28.638 "raid_level": "raid5f", 00:13:28.638 "superblock": false, 00:13:28.638 "num_base_bdevs": 3, 00:13:28.638 "num_base_bdevs_discovered": 3, 00:13:28.638 "num_base_bdevs_operational": 3, 00:13:28.638 "process": { 00:13:28.638 "type": "rebuild", 00:13:28.638 "target": "spare", 00:13:28.638 "progress": { 00:13:28.638 "blocks": 116736, 00:13:28.638 "percent": 89 00:13:28.638 } 00:13:28.638 }, 00:13:28.638 "base_bdevs_list": [ 00:13:28.638 { 
00:13:28.638 "name": "spare", 00:13:28.638 "uuid": "7fd94674-a1d2-5b93-9505-5e915e4f9f6f", 00:13:28.638 "is_configured": true, 00:13:28.638 "data_offset": 0, 00:13:28.638 "data_size": 65536 00:13:28.638 }, 00:13:28.638 { 00:13:28.638 "name": "BaseBdev2", 00:13:28.638 "uuid": "18ba6433-0e59-5207-b7f8-24df39ff495c", 00:13:28.638 "is_configured": true, 00:13:28.638 "data_offset": 0, 00:13:28.638 "data_size": 65536 00:13:28.638 }, 00:13:28.638 { 00:13:28.638 "name": "BaseBdev3", 00:13:28.638 "uuid": "05a6fd19-ccb3-5610-92b4-4a7fc6277aaa", 00:13:28.638 "is_configured": true, 00:13:28.638 "data_offset": 0, 00:13:28.638 "data_size": 65536 00:13:28.638 } 00:13:28.638 ] 00:13:28.638 }' 00:13:28.638 05:02:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:28.897 05:02:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:28.897 05:02:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:28.897 05:02:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:28.897 05:02:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:29.465 [2024-12-14 05:02:40.151114] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:29.465 [2024-12-14 05:02:40.151205] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:29.465 [2024-12-14 05:02:40.151242] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:29.725 05:02:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:29.725 05:02:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:29.725 05:02:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:29.725 05:02:40 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:29.725 05:02:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:29.725 05:02:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:29.725 05:02:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:29.725 05:02:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.725 05:02:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:29.725 05:02:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.985 05:02:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.985 05:02:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:29.985 "name": "raid_bdev1", 00:13:29.985 "uuid": "6deeeccd-5a20-4e0d-bdf9-318a3db9526e", 00:13:29.985 "strip_size_kb": 64, 00:13:29.985 "state": "online", 00:13:29.985 "raid_level": "raid5f", 00:13:29.985 "superblock": false, 00:13:29.985 "num_base_bdevs": 3, 00:13:29.985 "num_base_bdevs_discovered": 3, 00:13:29.985 "num_base_bdevs_operational": 3, 00:13:29.985 "base_bdevs_list": [ 00:13:29.985 { 00:13:29.985 "name": "spare", 00:13:29.985 "uuid": "7fd94674-a1d2-5b93-9505-5e915e4f9f6f", 00:13:29.985 "is_configured": true, 00:13:29.985 "data_offset": 0, 00:13:29.985 "data_size": 65536 00:13:29.985 }, 00:13:29.985 { 00:13:29.985 "name": "BaseBdev2", 00:13:29.985 "uuid": "18ba6433-0e59-5207-b7f8-24df39ff495c", 00:13:29.985 "is_configured": true, 00:13:29.985 "data_offset": 0, 00:13:29.985 "data_size": 65536 00:13:29.985 }, 00:13:29.985 { 00:13:29.985 "name": "BaseBdev3", 00:13:29.985 "uuid": "05a6fd19-ccb3-5610-92b4-4a7fc6277aaa", 00:13:29.985 "is_configured": true, 00:13:29.985 "data_offset": 0, 00:13:29.985 "data_size": 65536 00:13:29.985 } 
00:13:29.985 ] 00:13:29.985 }' 00:13:29.985 05:02:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:29.985 05:02:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:29.985 05:02:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:29.985 05:02:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:29.985 05:02:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:13:29.985 05:02:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:29.985 05:02:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:29.985 05:02:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:29.985 05:02:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:29.985 05:02:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:29.985 05:02:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:29.985 05:02:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.985 05:02:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.985 05:02:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:29.985 05:02:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.985 05:02:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:29.985 "name": "raid_bdev1", 00:13:29.986 "uuid": "6deeeccd-5a20-4e0d-bdf9-318a3db9526e", 00:13:29.986 "strip_size_kb": 64, 00:13:29.986 "state": "online", 00:13:29.986 "raid_level": "raid5f", 00:13:29.986 "superblock": false, 
00:13:29.986 "num_base_bdevs": 3, 00:13:29.986 "num_base_bdevs_discovered": 3, 00:13:29.986 "num_base_bdevs_operational": 3, 00:13:29.986 "base_bdevs_list": [ 00:13:29.986 { 00:13:29.986 "name": "spare", 00:13:29.986 "uuid": "7fd94674-a1d2-5b93-9505-5e915e4f9f6f", 00:13:29.986 "is_configured": true, 00:13:29.986 "data_offset": 0, 00:13:29.986 "data_size": 65536 00:13:29.986 }, 00:13:29.986 { 00:13:29.986 "name": "BaseBdev2", 00:13:29.986 "uuid": "18ba6433-0e59-5207-b7f8-24df39ff495c", 00:13:29.986 "is_configured": true, 00:13:29.986 "data_offset": 0, 00:13:29.986 "data_size": 65536 00:13:29.986 }, 00:13:29.986 { 00:13:29.986 "name": "BaseBdev3", 00:13:29.986 "uuid": "05a6fd19-ccb3-5610-92b4-4a7fc6277aaa", 00:13:29.986 "is_configured": true, 00:13:29.986 "data_offset": 0, 00:13:29.986 "data_size": 65536 00:13:29.986 } 00:13:29.986 ] 00:13:29.986 }' 00:13:29.986 05:02:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:29.986 05:02:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:29.986 05:02:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:30.245 05:02:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:30.245 05:02:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:13:30.245 05:02:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:30.245 05:02:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:30.245 05:02:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:30.245 05:02:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:30.245 05:02:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:30.245 
05:02:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:30.246 05:02:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:30.246 05:02:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:30.246 05:02:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:30.246 05:02:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:30.246 05:02:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:30.246 05:02:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.246 05:02:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.246 05:02:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.246 05:02:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:30.246 "name": "raid_bdev1", 00:13:30.246 "uuid": "6deeeccd-5a20-4e0d-bdf9-318a3db9526e", 00:13:30.246 "strip_size_kb": 64, 00:13:30.246 "state": "online", 00:13:30.246 "raid_level": "raid5f", 00:13:30.246 "superblock": false, 00:13:30.246 "num_base_bdevs": 3, 00:13:30.246 "num_base_bdevs_discovered": 3, 00:13:30.246 "num_base_bdevs_operational": 3, 00:13:30.246 "base_bdevs_list": [ 00:13:30.246 { 00:13:30.246 "name": "spare", 00:13:30.246 "uuid": "7fd94674-a1d2-5b93-9505-5e915e4f9f6f", 00:13:30.246 "is_configured": true, 00:13:30.246 "data_offset": 0, 00:13:30.246 "data_size": 65536 00:13:30.246 }, 00:13:30.246 { 00:13:30.246 "name": "BaseBdev2", 00:13:30.246 "uuid": "18ba6433-0e59-5207-b7f8-24df39ff495c", 00:13:30.246 "is_configured": true, 00:13:30.246 "data_offset": 0, 00:13:30.246 "data_size": 65536 00:13:30.246 }, 00:13:30.246 { 00:13:30.246 "name": "BaseBdev3", 00:13:30.246 "uuid": "05a6fd19-ccb3-5610-92b4-4a7fc6277aaa", 
00:13:30.246 "is_configured": true, 00:13:30.246 "data_offset": 0, 00:13:30.246 "data_size": 65536 00:13:30.246 } 00:13:30.246 ] 00:13:30.246 }' 00:13:30.246 05:02:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:30.246 05:02:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.505 05:02:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:30.505 05:02:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.505 05:02:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.505 [2024-12-14 05:02:41.326232] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:30.505 [2024-12-14 05:02:41.326304] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:30.505 [2024-12-14 05:02:41.326425] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:30.505 [2024-12-14 05:02:41.326527] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:30.505 [2024-12-14 05:02:41.326604] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:13:30.505 05:02:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.505 05:02:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:30.505 05:02:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:13:30.505 05:02:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.505 05:02:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.505 05:02:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.765 05:02:41 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:30.765 05:02:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:30.765 05:02:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:13:30.765 05:02:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:13:30.765 05:02:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:30.765 05:02:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:13:30.765 05:02:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:30.765 05:02:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:30.765 05:02:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:30.765 05:02:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:13:30.765 05:02:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:30.765 05:02:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:30.765 05:02:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:13:30.765 /dev/nbd0 00:13:30.765 05:02:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:30.765 05:02:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:30.765 05:02:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:13:30.765 05:02:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:13:30.765 05:02:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:30.765 05:02:41 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:30.765 05:02:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:13:30.765 05:02:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:13:30.765 05:02:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:30.765 05:02:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:30.765 05:02:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:30.765 1+0 records in 00:13:30.765 1+0 records out 00:13:30.765 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000312563 s, 13.1 MB/s 00:13:30.766 05:02:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:30.766 05:02:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:13:30.766 05:02:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:30.766 05:02:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:30.766 05:02:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:13:30.766 05:02:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:30.766 05:02:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:30.766 05:02:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:13:31.026 /dev/nbd1 00:13:31.026 05:02:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:31.026 05:02:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:31.026 05:02:41 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:13:31.026 05:02:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:13:31.026 05:02:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:31.026 05:02:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:31.026 05:02:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:13:31.026 05:02:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:13:31.026 05:02:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:31.026 05:02:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:31.026 05:02:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:31.026 1+0 records in 00:13:31.026 1+0 records out 00:13:31.026 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000440386 s, 9.3 MB/s 00:13:31.026 05:02:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:31.026 05:02:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:13:31.026 05:02:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:31.026 05:02:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:31.026 05:02:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:13:31.026 05:02:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:31.026 05:02:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:31.026 05:02:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- 
# cmp -i 0 /dev/nbd0 /dev/nbd1 00:13:31.286 05:02:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:13:31.286 05:02:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:31.286 05:02:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:31.286 05:02:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:31.286 05:02:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:13:31.286 05:02:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:31.286 05:02:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:31.286 05:02:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:31.286 05:02:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:31.286 05:02:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:31.286 05:02:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:31.286 05:02:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:31.286 05:02:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:31.286 05:02:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:13:31.286 05:02:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:13:31.286 05:02:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:31.286 05:02:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:31.546 05:02:42 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:31.546 05:02:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:31.546 05:02:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:31.546 05:02:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:31.546 05:02:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:31.546 05:02:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:31.546 05:02:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:13:31.546 05:02:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:13:31.546 05:02:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:13:31.546 05:02:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 92096 00:13:31.546 05:02:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@950 -- # '[' -z 92096 ']' 00:13:31.546 05:02:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # kill -0 92096 00:13:31.546 05:02:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@955 -- # uname 00:13:31.546 05:02:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:31.546 05:02:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 92096 00:13:31.546 05:02:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:31.546 killing process with pid 92096 00:13:31.546 Received shutdown signal, test time was about 60.000000 seconds 00:13:31.546 00:13:31.546 Latency(us) 00:13:31.546 [2024-12-14T05:02:42.429Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:31.546 [2024-12-14T05:02:42.429Z] 
=================================================================================================================== 00:13:31.546 [2024-12-14T05:02:42.429Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:31.546 05:02:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:31.546 05:02:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 92096' 00:13:31.546 05:02:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@969 -- # kill 92096 00:13:31.546 [2024-12-14 05:02:42.379169] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:31.546 05:02:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@974 -- # wait 92096 00:13:31.546 [2024-12-14 05:02:42.419517] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:31.806 ************************************ 00:13:31.806 END TEST raid5f_rebuild_test 00:13:31.806 ************************************ 00:13:31.806 05:02:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:13:31.806 00:13:31.806 real 0m13.670s 00:13:31.806 user 0m17.083s 00:13:31.806 sys 0m2.073s 00:13:31.806 05:02:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:31.806 05:02:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:32.067 05:02:42 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 3 true false true 00:13:32.067 05:02:42 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:13:32.067 05:02:42 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:32.067 05:02:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:32.067 ************************************ 00:13:32.067 START TEST raid5f_rebuild_test_sb 00:13:32.067 ************************************ 00:13:32.067 05:02:42 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@1125 -- # raid_rebuild_test raid5f 3 true false true 00:13:32.067 05:02:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:13:32.067 05:02:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:13:32.067 05:02:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:13:32.067 05:02:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:13:32.067 05:02:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:32.067 05:02:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:32.067 05:02:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:32.067 05:02:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:32.067 05:02:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:32.067 05:02:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:32.067 05:02:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:32.067 05:02:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:32.067 05:02:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:32.067 05:02:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:13:32.067 05:02:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:32.067 05:02:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:32.067 05:02:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:13:32.067 05:02:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:32.067 
05:02:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:32.067 05:02:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:32.067 05:02:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:32.067 05:02:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:32.067 05:02:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:32.067 05:02:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:13:32.067 05:02:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:13:32.067 05:02:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:13:32.067 05:02:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:13:32.067 05:02:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:13:32.067 05:02:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:13:32.067 05:02:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=92525 00:13:32.067 05:02:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 92525 00:13:32.067 05:02:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:32.067 05:02:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@831 -- # '[' -z 92525 ']' 00:13:32.067 05:02:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:32.067 05:02:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:32.067 05:02:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@838 -- 
# echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:32.067 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:32.067 05:02:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:32.067 05:02:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.067 [2024-12-14 05:02:42.824988] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:13:32.067 [2024-12-14 05:02:42.825198] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92525 ] 00:13:32.067 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:32.067 Zero copy mechanism will not be used. 00:13:32.327 [2024-12-14 05:02:42.987562] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:32.327 [2024-12-14 05:02:43.034803] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:32.327 [2024-12-14 05:02:43.078062] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:32.327 [2024-12-14 05:02:43.078195] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:32.897 05:02:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:32.897 05:02:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # return 0 00:13:32.897 05:02:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:32.897 05:02:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:32.897 05:02:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.897 05:02:43 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.897 BaseBdev1_malloc 00:13:32.897 05:02:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.897 05:02:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:32.897 05:02:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.897 05:02:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.897 [2024-12-14 05:02:43.684989] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:32.897 [2024-12-14 05:02:43.685092] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:32.897 [2024-12-14 05:02:43.685138] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:32.897 [2024-12-14 05:02:43.685180] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:32.897 [2024-12-14 05:02:43.687220] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:32.897 [2024-12-14 05:02:43.687311] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:32.897 BaseBdev1 00:13:32.897 05:02:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.897 05:02:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:32.897 05:02:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:32.897 05:02:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.897 05:02:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.897 BaseBdev2_malloc 00:13:32.897 05:02:43 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.897 05:02:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:32.897 05:02:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.897 05:02:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.897 [2024-12-14 05:02:43.729945] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:32.897 [2024-12-14 05:02:43.730141] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:32.897 [2024-12-14 05:02:43.730262] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:32.897 [2024-12-14 05:02:43.730344] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:32.897 [2024-12-14 05:02:43.735114] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:32.897 [2024-12-14 05:02:43.735301] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:32.897 BaseBdev2 00:13:32.897 05:02:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.897 05:02:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:32.897 05:02:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:32.897 05:02:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.897 05:02:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.897 BaseBdev3_malloc 00:13:32.897 05:02:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.897 05:02:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b 
BaseBdev3_malloc -p BaseBdev3 00:13:32.897 05:02:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.897 05:02:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.897 [2024-12-14 05:02:43.761752] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:13:32.897 [2024-12-14 05:02:43.761841] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:32.897 [2024-12-14 05:02:43.761883] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:32.897 [2024-12-14 05:02:43.761911] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:32.897 [2024-12-14 05:02:43.763968] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:32.897 [2024-12-14 05:02:43.764040] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:32.897 BaseBdev3 00:13:32.897 05:02:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.897 05:02:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:32.897 05:02:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.897 05:02:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.157 spare_malloc 00:13:33.157 05:02:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.157 05:02:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:33.157 05:02:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.157 05:02:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.157 spare_delay 00:13:33.157 
05:02:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.157 05:02:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:33.157 05:02:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.157 05:02:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.157 [2024-12-14 05:02:43.802729] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:33.157 [2024-12-14 05:02:43.802831] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:33.157 [2024-12-14 05:02:43.802857] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:13:33.158 [2024-12-14 05:02:43.802865] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:33.158 [2024-12-14 05:02:43.804910] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:33.158 [2024-12-14 05:02:43.804946] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:33.158 spare 00:13:33.158 05:02:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.158 05:02:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:13:33.158 05:02:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.158 05:02:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.158 [2024-12-14 05:02:43.814785] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:33.158 [2024-12-14 05:02:43.816595] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:33.158 [2024-12-14 05:02:43.816702] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:33.158 [2024-12-14 05:02:43.816875] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:13:33.158 [2024-12-14 05:02:43.816924] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:13:33.158 [2024-12-14 05:02:43.817184] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:13:33.158 [2024-12-14 05:02:43.817619] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:13:33.158 [2024-12-14 05:02:43.817666] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:13:33.158 [2024-12-14 05:02:43.817823] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:33.158 05:02:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.158 05:02:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:13:33.158 05:02:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:33.158 05:02:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:33.158 05:02:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:33.158 05:02:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:33.158 05:02:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:33.158 05:02:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:33.158 05:02:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:33.158 05:02:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:13:33.158 05:02:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:33.158 05:02:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:33.158 05:02:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.158 05:02:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:33.158 05:02:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.158 05:02:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.158 05:02:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:33.158 "name": "raid_bdev1", 00:13:33.158 "uuid": "9c818957-5cdb-49a0-b889-284244afeb02", 00:13:33.158 "strip_size_kb": 64, 00:13:33.158 "state": "online", 00:13:33.158 "raid_level": "raid5f", 00:13:33.158 "superblock": true, 00:13:33.158 "num_base_bdevs": 3, 00:13:33.158 "num_base_bdevs_discovered": 3, 00:13:33.158 "num_base_bdevs_operational": 3, 00:13:33.158 "base_bdevs_list": [ 00:13:33.158 { 00:13:33.158 "name": "BaseBdev1", 00:13:33.158 "uuid": "1ceba6dc-3390-5db4-ab25-5a8e1f73e16f", 00:13:33.158 "is_configured": true, 00:13:33.158 "data_offset": 2048, 00:13:33.158 "data_size": 63488 00:13:33.158 }, 00:13:33.158 { 00:13:33.158 "name": "BaseBdev2", 00:13:33.158 "uuid": "3321156a-c10f-50ad-b198-e35670a12c0d", 00:13:33.158 "is_configured": true, 00:13:33.158 "data_offset": 2048, 00:13:33.158 "data_size": 63488 00:13:33.158 }, 00:13:33.158 { 00:13:33.158 "name": "BaseBdev3", 00:13:33.158 "uuid": "b1c0a9d2-68f4-5bde-ab56-19087825e47f", 00:13:33.158 "is_configured": true, 00:13:33.158 "data_offset": 2048, 00:13:33.158 "data_size": 63488 00:13:33.158 } 00:13:33.158 ] 00:13:33.158 }' 00:13:33.158 05:02:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:33.158 05:02:43 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.418 05:02:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:33.418 05:02:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.418 05:02:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.418 05:02:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:33.418 [2024-12-14 05:02:44.258560] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:33.418 05:02:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.418 05:02:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=126976 00:13:33.678 05:02:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:33.678 05:02:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.678 05:02:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:33.678 05:02:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.678 05:02:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.678 05:02:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:13:33.678 05:02:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:13:33.678 05:02:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:13:33.678 05:02:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:13:33.678 05:02:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:13:33.678 05:02:44 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:33.678 05:02:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:13:33.678 05:02:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:33.678 05:02:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:33.678 05:02:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:33.678 05:02:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:13:33.678 05:02:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:33.678 05:02:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:33.678 05:02:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:13:33.678 [2024-12-14 05:02:44.510006] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:13:33.678 /dev/nbd0 00:13:33.678 05:02:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:33.938 05:02:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:33.938 05:02:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:13:33.938 05:02:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:13:33.938 05:02:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:33.938 05:02:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:33.938 05:02:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:13:33.938 05:02:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 
00:13:33.938 05:02:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:33.938 05:02:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:33.938 05:02:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:33.938 1+0 records in 00:13:33.938 1+0 records out 00:13:33.938 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000598362 s, 6.8 MB/s 00:13:33.938 05:02:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:33.938 05:02:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:13:33.938 05:02:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:33.938 05:02:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:33.938 05:02:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:13:33.938 05:02:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:33.938 05:02:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:33.938 05:02:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:13:33.938 05:02:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:13:33.938 05:02:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 128 00:13:33.938 05:02:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=496 oflag=direct 00:13:34.198 496+0 records in 00:13:34.198 496+0 records out 00:13:34.198 65011712 bytes (65 MB, 62 MiB) copied, 0.283095 s, 230 MB/s 00:13:34.198 05:02:44 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:34.198 05:02:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:34.198 05:02:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:34.198 05:02:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:34.198 05:02:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:13:34.198 05:02:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:34.198 05:02:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:34.457 05:02:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:34.457 [2024-12-14 05:02:45.086613] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:34.457 05:02:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:34.457 05:02:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:34.457 05:02:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:34.457 05:02:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:34.457 05:02:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:34.457 05:02:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:13:34.457 05:02:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:13:34.457 05:02:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:34.457 05:02:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.457 05:02:45 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:13:34.457 [2024-12-14 05:02:45.105025] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:34.457 05:02:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.457 05:02:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:13:34.457 05:02:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:34.457 05:02:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:34.457 05:02:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:34.458 05:02:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:34.458 05:02:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:34.458 05:02:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:34.458 05:02:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:34.458 05:02:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:34.458 05:02:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:34.458 05:02:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:34.458 05:02:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:34.458 05:02:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.458 05:02:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.458 05:02:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.458 05:02:45 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:34.458 "name": "raid_bdev1", 00:13:34.458 "uuid": "9c818957-5cdb-49a0-b889-284244afeb02", 00:13:34.458 "strip_size_kb": 64, 00:13:34.458 "state": "online", 00:13:34.458 "raid_level": "raid5f", 00:13:34.458 "superblock": true, 00:13:34.458 "num_base_bdevs": 3, 00:13:34.458 "num_base_bdevs_discovered": 2, 00:13:34.458 "num_base_bdevs_operational": 2, 00:13:34.458 "base_bdevs_list": [ 00:13:34.458 { 00:13:34.458 "name": null, 00:13:34.458 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:34.458 "is_configured": false, 00:13:34.458 "data_offset": 0, 00:13:34.458 "data_size": 63488 00:13:34.458 }, 00:13:34.458 { 00:13:34.458 "name": "BaseBdev2", 00:13:34.458 "uuid": "3321156a-c10f-50ad-b198-e35670a12c0d", 00:13:34.458 "is_configured": true, 00:13:34.458 "data_offset": 2048, 00:13:34.458 "data_size": 63488 00:13:34.458 }, 00:13:34.458 { 00:13:34.458 "name": "BaseBdev3", 00:13:34.458 "uuid": "b1c0a9d2-68f4-5bde-ab56-19087825e47f", 00:13:34.458 "is_configured": true, 00:13:34.458 "data_offset": 2048, 00:13:34.458 "data_size": 63488 00:13:34.458 } 00:13:34.458 ] 00:13:34.458 }' 00:13:34.458 05:02:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:34.458 05:02:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.717 05:02:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:34.717 05:02:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.717 05:02:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.717 [2024-12-14 05:02:45.576273] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:34.717 [2024-12-14 05:02:45.580141] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000028de0 00:13:34.717 [2024-12-14 05:02:45.582258] 
bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:34.717 05:02:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.717 05:02:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:36.098 05:02:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:36.098 05:02:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:36.098 05:02:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:36.098 05:02:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:36.098 05:02:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:36.098 05:02:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:36.098 05:02:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:36.098 05:02:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.098 05:02:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:36.098 05:02:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.098 05:02:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:36.098 "name": "raid_bdev1", 00:13:36.098 "uuid": "9c818957-5cdb-49a0-b889-284244afeb02", 00:13:36.098 "strip_size_kb": 64, 00:13:36.098 "state": "online", 00:13:36.098 "raid_level": "raid5f", 00:13:36.098 "superblock": true, 00:13:36.098 "num_base_bdevs": 3, 00:13:36.098 "num_base_bdevs_discovered": 3, 00:13:36.098 "num_base_bdevs_operational": 3, 00:13:36.098 "process": { 00:13:36.098 "type": "rebuild", 00:13:36.098 "target": "spare", 00:13:36.098 
"progress": { 00:13:36.098 "blocks": 20480, 00:13:36.098 "percent": 16 00:13:36.098 } 00:13:36.098 }, 00:13:36.098 "base_bdevs_list": [ 00:13:36.098 { 00:13:36.098 "name": "spare", 00:13:36.098 "uuid": "ca26335a-e7ce-58e3-b141-0b1a0f206e2e", 00:13:36.098 "is_configured": true, 00:13:36.098 "data_offset": 2048, 00:13:36.098 "data_size": 63488 00:13:36.098 }, 00:13:36.098 { 00:13:36.098 "name": "BaseBdev2", 00:13:36.098 "uuid": "3321156a-c10f-50ad-b198-e35670a12c0d", 00:13:36.098 "is_configured": true, 00:13:36.098 "data_offset": 2048, 00:13:36.098 "data_size": 63488 00:13:36.098 }, 00:13:36.098 { 00:13:36.098 "name": "BaseBdev3", 00:13:36.098 "uuid": "b1c0a9d2-68f4-5bde-ab56-19087825e47f", 00:13:36.098 "is_configured": true, 00:13:36.098 "data_offset": 2048, 00:13:36.098 "data_size": 63488 00:13:36.098 } 00:13:36.098 ] 00:13:36.098 }' 00:13:36.098 05:02:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:36.098 05:02:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:36.098 05:02:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:36.098 05:02:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:36.098 05:02:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:36.098 05:02:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.098 05:02:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:36.098 [2024-12-14 05:02:46.745095] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:36.098 [2024-12-14 05:02:46.789022] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:36.098 [2024-12-14 05:02:46.789080] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:13:36.098 [2024-12-14 05:02:46.789095] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:36.098 [2024-12-14 05:02:46.789106] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:36.098 05:02:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.098 05:02:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:13:36.098 05:02:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:36.098 05:02:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:36.098 05:02:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:36.098 05:02:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:36.098 05:02:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:36.098 05:02:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:36.098 05:02:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:36.098 05:02:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:36.098 05:02:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:36.098 05:02:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:36.098 05:02:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:36.098 05:02:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.098 05:02:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:36.098 05:02:46 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.098 05:02:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:36.098 "name": "raid_bdev1", 00:13:36.098 "uuid": "9c818957-5cdb-49a0-b889-284244afeb02", 00:13:36.098 "strip_size_kb": 64, 00:13:36.098 "state": "online", 00:13:36.098 "raid_level": "raid5f", 00:13:36.098 "superblock": true, 00:13:36.098 "num_base_bdevs": 3, 00:13:36.098 "num_base_bdevs_discovered": 2, 00:13:36.098 "num_base_bdevs_operational": 2, 00:13:36.098 "base_bdevs_list": [ 00:13:36.098 { 00:13:36.098 "name": null, 00:13:36.098 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:36.098 "is_configured": false, 00:13:36.098 "data_offset": 0, 00:13:36.098 "data_size": 63488 00:13:36.098 }, 00:13:36.098 { 00:13:36.098 "name": "BaseBdev2", 00:13:36.098 "uuid": "3321156a-c10f-50ad-b198-e35670a12c0d", 00:13:36.098 "is_configured": true, 00:13:36.098 "data_offset": 2048, 00:13:36.098 "data_size": 63488 00:13:36.098 }, 00:13:36.098 { 00:13:36.098 "name": "BaseBdev3", 00:13:36.098 "uuid": "b1c0a9d2-68f4-5bde-ab56-19087825e47f", 00:13:36.098 "is_configured": true, 00:13:36.098 "data_offset": 2048, 00:13:36.098 "data_size": 63488 00:13:36.098 } 00:13:36.098 ] 00:13:36.098 }' 00:13:36.098 05:02:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:36.098 05:02:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:36.358 05:02:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:36.358 05:02:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:36.358 05:02:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:36.358 05:02:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:36.358 05:02:47 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:36.358 05:02:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:36.358 05:02:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.358 05:02:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:36.358 05:02:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:36.358 05:02:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.618 05:02:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:36.618 "name": "raid_bdev1", 00:13:36.618 "uuid": "9c818957-5cdb-49a0-b889-284244afeb02", 00:13:36.618 "strip_size_kb": 64, 00:13:36.618 "state": "online", 00:13:36.618 "raid_level": "raid5f", 00:13:36.618 "superblock": true, 00:13:36.618 "num_base_bdevs": 3, 00:13:36.618 "num_base_bdevs_discovered": 2, 00:13:36.618 "num_base_bdevs_operational": 2, 00:13:36.618 "base_bdevs_list": [ 00:13:36.618 { 00:13:36.618 "name": null, 00:13:36.618 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:36.618 "is_configured": false, 00:13:36.618 "data_offset": 0, 00:13:36.618 "data_size": 63488 00:13:36.618 }, 00:13:36.618 { 00:13:36.618 "name": "BaseBdev2", 00:13:36.618 "uuid": "3321156a-c10f-50ad-b198-e35670a12c0d", 00:13:36.618 "is_configured": true, 00:13:36.618 "data_offset": 2048, 00:13:36.618 "data_size": 63488 00:13:36.618 }, 00:13:36.618 { 00:13:36.618 "name": "BaseBdev3", 00:13:36.618 "uuid": "b1c0a9d2-68f4-5bde-ab56-19087825e47f", 00:13:36.618 "is_configured": true, 00:13:36.618 "data_offset": 2048, 00:13:36.618 "data_size": 63488 00:13:36.618 } 00:13:36.618 ] 00:13:36.618 }' 00:13:36.618 05:02:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:36.619 05:02:47 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:36.619 05:02:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:36.619 05:02:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:36.619 05:02:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:36.619 05:02:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.619 05:02:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:36.619 [2024-12-14 05:02:47.369528] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:36.619 [2024-12-14 05:02:47.372999] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000028eb0 00:13:36.619 [2024-12-14 05:02:47.375124] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:36.619 05:02:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.619 05:02:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:37.558 05:02:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:37.558 05:02:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:37.558 05:02:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:37.558 05:02:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:37.558 05:02:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:37.558 05:02:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:37.558 05:02:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:13:37.558 05:02:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.558 05:02:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:37.558 05:02:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.558 05:02:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:37.558 "name": "raid_bdev1", 00:13:37.558 "uuid": "9c818957-5cdb-49a0-b889-284244afeb02", 00:13:37.558 "strip_size_kb": 64, 00:13:37.558 "state": "online", 00:13:37.558 "raid_level": "raid5f", 00:13:37.558 "superblock": true, 00:13:37.558 "num_base_bdevs": 3, 00:13:37.558 "num_base_bdevs_discovered": 3, 00:13:37.558 "num_base_bdevs_operational": 3, 00:13:37.558 "process": { 00:13:37.558 "type": "rebuild", 00:13:37.558 "target": "spare", 00:13:37.558 "progress": { 00:13:37.558 "blocks": 20480, 00:13:37.558 "percent": 16 00:13:37.558 } 00:13:37.558 }, 00:13:37.558 "base_bdevs_list": [ 00:13:37.558 { 00:13:37.558 "name": "spare", 00:13:37.558 "uuid": "ca26335a-e7ce-58e3-b141-0b1a0f206e2e", 00:13:37.558 "is_configured": true, 00:13:37.558 "data_offset": 2048, 00:13:37.558 "data_size": 63488 00:13:37.558 }, 00:13:37.558 { 00:13:37.558 "name": "BaseBdev2", 00:13:37.558 "uuid": "3321156a-c10f-50ad-b198-e35670a12c0d", 00:13:37.558 "is_configured": true, 00:13:37.558 "data_offset": 2048, 00:13:37.558 "data_size": 63488 00:13:37.558 }, 00:13:37.558 { 00:13:37.558 "name": "BaseBdev3", 00:13:37.558 "uuid": "b1c0a9d2-68f4-5bde-ab56-19087825e47f", 00:13:37.558 "is_configured": true, 00:13:37.558 "data_offset": 2048, 00:13:37.558 "data_size": 63488 00:13:37.558 } 00:13:37.558 ] 00:13:37.558 }' 00:13:37.558 05:02:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:37.818 05:02:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:37.818 
05:02:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:37.818 05:02:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:37.818 05:02:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:13:37.818 05:02:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:13:37.818 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:13:37.818 05:02:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:13:37.818 05:02:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:13:37.818 05:02:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=459 00:13:37.818 05:02:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:37.818 05:02:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:37.818 05:02:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:37.818 05:02:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:37.818 05:02:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:37.818 05:02:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:37.818 05:02:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:37.818 05:02:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:37.818 05:02:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.818 05:02:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:13:37.818 05:02:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.818 05:02:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:37.818 "name": "raid_bdev1", 00:13:37.818 "uuid": "9c818957-5cdb-49a0-b889-284244afeb02", 00:13:37.818 "strip_size_kb": 64, 00:13:37.818 "state": "online", 00:13:37.818 "raid_level": "raid5f", 00:13:37.818 "superblock": true, 00:13:37.818 "num_base_bdevs": 3, 00:13:37.818 "num_base_bdevs_discovered": 3, 00:13:37.818 "num_base_bdevs_operational": 3, 00:13:37.818 "process": { 00:13:37.818 "type": "rebuild", 00:13:37.818 "target": "spare", 00:13:37.818 "progress": { 00:13:37.818 "blocks": 22528, 00:13:37.818 "percent": 17 00:13:37.818 } 00:13:37.818 }, 00:13:37.818 "base_bdevs_list": [ 00:13:37.818 { 00:13:37.818 "name": "spare", 00:13:37.818 "uuid": "ca26335a-e7ce-58e3-b141-0b1a0f206e2e", 00:13:37.818 "is_configured": true, 00:13:37.818 "data_offset": 2048, 00:13:37.818 "data_size": 63488 00:13:37.818 }, 00:13:37.818 { 00:13:37.818 "name": "BaseBdev2", 00:13:37.818 "uuid": "3321156a-c10f-50ad-b198-e35670a12c0d", 00:13:37.818 "is_configured": true, 00:13:37.818 "data_offset": 2048, 00:13:37.818 "data_size": 63488 00:13:37.818 }, 00:13:37.818 { 00:13:37.818 "name": "BaseBdev3", 00:13:37.818 "uuid": "b1c0a9d2-68f4-5bde-ab56-19087825e47f", 00:13:37.818 "is_configured": true, 00:13:37.818 "data_offset": 2048, 00:13:37.818 "data_size": 63488 00:13:37.818 } 00:13:37.818 ] 00:13:37.818 }' 00:13:37.818 05:02:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:37.818 05:02:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:37.818 05:02:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:37.818 05:02:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:37.818 
05:02:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:38.757 05:02:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:38.757 05:02:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:38.757 05:02:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:38.757 05:02:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:38.757 05:02:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:38.757 05:02:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:38.757 05:02:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:38.757 05:02:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.757 05:02:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:38.757 05:02:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:38.757 05:02:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.018 05:02:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:39.018 "name": "raid_bdev1", 00:13:39.018 "uuid": "9c818957-5cdb-49a0-b889-284244afeb02", 00:13:39.018 "strip_size_kb": 64, 00:13:39.018 "state": "online", 00:13:39.018 "raid_level": "raid5f", 00:13:39.018 "superblock": true, 00:13:39.018 "num_base_bdevs": 3, 00:13:39.018 "num_base_bdevs_discovered": 3, 00:13:39.018 "num_base_bdevs_operational": 3, 00:13:39.018 "process": { 00:13:39.018 "type": "rebuild", 00:13:39.018 "target": "spare", 00:13:39.018 "progress": { 00:13:39.018 "blocks": 45056, 00:13:39.018 "percent": 35 00:13:39.018 } 00:13:39.018 }, 00:13:39.018 
"base_bdevs_list": [ 00:13:39.018 { 00:13:39.018 "name": "spare", 00:13:39.018 "uuid": "ca26335a-e7ce-58e3-b141-0b1a0f206e2e", 00:13:39.018 "is_configured": true, 00:13:39.018 "data_offset": 2048, 00:13:39.018 "data_size": 63488 00:13:39.018 }, 00:13:39.018 { 00:13:39.018 "name": "BaseBdev2", 00:13:39.018 "uuid": "3321156a-c10f-50ad-b198-e35670a12c0d", 00:13:39.018 "is_configured": true, 00:13:39.018 "data_offset": 2048, 00:13:39.018 "data_size": 63488 00:13:39.018 }, 00:13:39.018 { 00:13:39.018 "name": "BaseBdev3", 00:13:39.018 "uuid": "b1c0a9d2-68f4-5bde-ab56-19087825e47f", 00:13:39.018 "is_configured": true, 00:13:39.018 "data_offset": 2048, 00:13:39.018 "data_size": 63488 00:13:39.018 } 00:13:39.018 ] 00:13:39.018 }' 00:13:39.018 05:02:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:39.018 05:02:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:39.018 05:02:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:39.018 05:02:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:39.018 05:02:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:39.957 05:02:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:39.957 05:02:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:39.957 05:02:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:39.957 05:02:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:39.957 05:02:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:39.957 05:02:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:39.957 05:02:50 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:39.957 05:02:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.957 05:02:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:39.957 05:02:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:39.957 05:02:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.957 05:02:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:39.957 "name": "raid_bdev1", 00:13:39.957 "uuid": "9c818957-5cdb-49a0-b889-284244afeb02", 00:13:39.957 "strip_size_kb": 64, 00:13:39.957 "state": "online", 00:13:39.957 "raid_level": "raid5f", 00:13:39.957 "superblock": true, 00:13:39.957 "num_base_bdevs": 3, 00:13:39.957 "num_base_bdevs_discovered": 3, 00:13:39.957 "num_base_bdevs_operational": 3, 00:13:39.957 "process": { 00:13:39.957 "type": "rebuild", 00:13:39.957 "target": "spare", 00:13:39.957 "progress": { 00:13:39.957 "blocks": 67584, 00:13:39.957 "percent": 53 00:13:39.957 } 00:13:39.957 }, 00:13:39.957 "base_bdevs_list": [ 00:13:39.957 { 00:13:39.957 "name": "spare", 00:13:39.957 "uuid": "ca26335a-e7ce-58e3-b141-0b1a0f206e2e", 00:13:39.957 "is_configured": true, 00:13:39.957 "data_offset": 2048, 00:13:39.957 "data_size": 63488 00:13:39.957 }, 00:13:39.957 { 00:13:39.957 "name": "BaseBdev2", 00:13:39.957 "uuid": "3321156a-c10f-50ad-b198-e35670a12c0d", 00:13:39.957 "is_configured": true, 00:13:39.957 "data_offset": 2048, 00:13:39.957 "data_size": 63488 00:13:39.957 }, 00:13:39.957 { 00:13:39.957 "name": "BaseBdev3", 00:13:39.957 "uuid": "b1c0a9d2-68f4-5bde-ab56-19087825e47f", 00:13:39.957 "is_configured": true, 00:13:39.957 "data_offset": 2048, 00:13:39.957 "data_size": 63488 00:13:39.957 } 00:13:39.957 ] 00:13:39.957 }' 00:13:39.957 05:02:50 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:40.217 05:02:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:40.217 05:02:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:40.217 05:02:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:40.217 05:02:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:41.156 05:02:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:41.156 05:02:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:41.156 05:02:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:41.156 05:02:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:41.156 05:02:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:41.156 05:02:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:41.156 05:02:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:41.156 05:02:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:41.156 05:02:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.156 05:02:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:41.156 05:02:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.156 05:02:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:41.156 "name": "raid_bdev1", 00:13:41.156 "uuid": "9c818957-5cdb-49a0-b889-284244afeb02", 00:13:41.156 
"strip_size_kb": 64, 00:13:41.156 "state": "online", 00:13:41.156 "raid_level": "raid5f", 00:13:41.156 "superblock": true, 00:13:41.156 "num_base_bdevs": 3, 00:13:41.156 "num_base_bdevs_discovered": 3, 00:13:41.156 "num_base_bdevs_operational": 3, 00:13:41.156 "process": { 00:13:41.156 "type": "rebuild", 00:13:41.156 "target": "spare", 00:13:41.156 "progress": { 00:13:41.156 "blocks": 92160, 00:13:41.156 "percent": 72 00:13:41.156 } 00:13:41.156 }, 00:13:41.156 "base_bdevs_list": [ 00:13:41.156 { 00:13:41.156 "name": "spare", 00:13:41.156 "uuid": "ca26335a-e7ce-58e3-b141-0b1a0f206e2e", 00:13:41.156 "is_configured": true, 00:13:41.156 "data_offset": 2048, 00:13:41.156 "data_size": 63488 00:13:41.156 }, 00:13:41.156 { 00:13:41.156 "name": "BaseBdev2", 00:13:41.156 "uuid": "3321156a-c10f-50ad-b198-e35670a12c0d", 00:13:41.156 "is_configured": true, 00:13:41.156 "data_offset": 2048, 00:13:41.156 "data_size": 63488 00:13:41.156 }, 00:13:41.156 { 00:13:41.156 "name": "BaseBdev3", 00:13:41.156 "uuid": "b1c0a9d2-68f4-5bde-ab56-19087825e47f", 00:13:41.156 "is_configured": true, 00:13:41.156 "data_offset": 2048, 00:13:41.156 "data_size": 63488 00:13:41.156 } 00:13:41.156 ] 00:13:41.156 }' 00:13:41.156 05:02:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:41.156 05:02:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:41.156 05:02:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:41.415 05:02:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:41.415 05:02:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:42.352 05:02:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:42.352 05:02:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 
00:13:42.352 05:02:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:42.352 05:02:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:42.352 05:02:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:42.352 05:02:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:42.352 05:02:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:42.352 05:02:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:42.352 05:02:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.352 05:02:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:42.352 05:02:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.352 05:02:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:42.352 "name": "raid_bdev1", 00:13:42.352 "uuid": "9c818957-5cdb-49a0-b889-284244afeb02", 00:13:42.352 "strip_size_kb": 64, 00:13:42.352 "state": "online", 00:13:42.352 "raid_level": "raid5f", 00:13:42.352 "superblock": true, 00:13:42.352 "num_base_bdevs": 3, 00:13:42.352 "num_base_bdevs_discovered": 3, 00:13:42.352 "num_base_bdevs_operational": 3, 00:13:42.352 "process": { 00:13:42.352 "type": "rebuild", 00:13:42.352 "target": "spare", 00:13:42.352 "progress": { 00:13:42.352 "blocks": 114688, 00:13:42.352 "percent": 90 00:13:42.352 } 00:13:42.352 }, 00:13:42.352 "base_bdevs_list": [ 00:13:42.352 { 00:13:42.352 "name": "spare", 00:13:42.352 "uuid": "ca26335a-e7ce-58e3-b141-0b1a0f206e2e", 00:13:42.352 "is_configured": true, 00:13:42.352 "data_offset": 2048, 00:13:42.352 "data_size": 63488 00:13:42.352 }, 00:13:42.352 { 00:13:42.352 "name": "BaseBdev2", 00:13:42.352 "uuid": 
"3321156a-c10f-50ad-b198-e35670a12c0d", 00:13:42.352 "is_configured": true, 00:13:42.352 "data_offset": 2048, 00:13:42.352 "data_size": 63488 00:13:42.352 }, 00:13:42.352 { 00:13:42.352 "name": "BaseBdev3", 00:13:42.352 "uuid": "b1c0a9d2-68f4-5bde-ab56-19087825e47f", 00:13:42.352 "is_configured": true, 00:13:42.352 "data_offset": 2048, 00:13:42.352 "data_size": 63488 00:13:42.352 } 00:13:42.352 ] 00:13:42.352 }' 00:13:42.352 05:02:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:42.352 05:02:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:42.352 05:02:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:42.352 05:02:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:42.352 05:02:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:42.921 [2024-12-14 05:02:53.606976] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:42.921 [2024-12-14 05:02:53.607092] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:42.921 [2024-12-14 05:02:53.607259] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:43.491 05:02:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:43.491 05:02:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:43.491 05:02:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:43.491 05:02:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:43.491 05:02:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:43.491 05:02:54 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:43.491 05:02:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:43.491 05:02:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:43.491 05:02:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.491 05:02:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:43.491 05:02:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.491 05:02:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:43.491 "name": "raid_bdev1", 00:13:43.491 "uuid": "9c818957-5cdb-49a0-b889-284244afeb02", 00:13:43.491 "strip_size_kb": 64, 00:13:43.491 "state": "online", 00:13:43.491 "raid_level": "raid5f", 00:13:43.491 "superblock": true, 00:13:43.491 "num_base_bdevs": 3, 00:13:43.491 "num_base_bdevs_discovered": 3, 00:13:43.491 "num_base_bdevs_operational": 3, 00:13:43.491 "base_bdevs_list": [ 00:13:43.491 { 00:13:43.491 "name": "spare", 00:13:43.491 "uuid": "ca26335a-e7ce-58e3-b141-0b1a0f206e2e", 00:13:43.491 "is_configured": true, 00:13:43.491 "data_offset": 2048, 00:13:43.491 "data_size": 63488 00:13:43.491 }, 00:13:43.491 { 00:13:43.491 "name": "BaseBdev2", 00:13:43.491 "uuid": "3321156a-c10f-50ad-b198-e35670a12c0d", 00:13:43.491 "is_configured": true, 00:13:43.491 "data_offset": 2048, 00:13:43.491 "data_size": 63488 00:13:43.491 }, 00:13:43.491 { 00:13:43.491 "name": "BaseBdev3", 00:13:43.491 "uuid": "b1c0a9d2-68f4-5bde-ab56-19087825e47f", 00:13:43.491 "is_configured": true, 00:13:43.491 "data_offset": 2048, 00:13:43.491 "data_size": 63488 00:13:43.491 } 00:13:43.491 ] 00:13:43.491 }' 00:13:43.491 05:02:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:43.491 05:02:54 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:43.491 05:02:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:43.491 05:02:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:43.491 05:02:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:13:43.491 05:02:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:43.491 05:02:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:43.491 05:02:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:43.491 05:02:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:43.491 05:02:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:43.491 05:02:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:43.491 05:02:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:43.491 05:02:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.491 05:02:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:43.491 05:02:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.751 05:02:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:43.751 "name": "raid_bdev1", 00:13:43.751 "uuid": "9c818957-5cdb-49a0-b889-284244afeb02", 00:13:43.751 "strip_size_kb": 64, 00:13:43.751 "state": "online", 00:13:43.751 "raid_level": "raid5f", 00:13:43.751 "superblock": true, 00:13:43.751 "num_base_bdevs": 3, 00:13:43.751 "num_base_bdevs_discovered": 3, 00:13:43.751 "num_base_bdevs_operational": 3, 00:13:43.751 "base_bdevs_list": [ 
00:13:43.751 { 00:13:43.751 "name": "spare", 00:13:43.751 "uuid": "ca26335a-e7ce-58e3-b141-0b1a0f206e2e", 00:13:43.751 "is_configured": true, 00:13:43.751 "data_offset": 2048, 00:13:43.751 "data_size": 63488 00:13:43.751 }, 00:13:43.751 { 00:13:43.751 "name": "BaseBdev2", 00:13:43.751 "uuid": "3321156a-c10f-50ad-b198-e35670a12c0d", 00:13:43.751 "is_configured": true, 00:13:43.751 "data_offset": 2048, 00:13:43.751 "data_size": 63488 00:13:43.751 }, 00:13:43.751 { 00:13:43.751 "name": "BaseBdev3", 00:13:43.751 "uuid": "b1c0a9d2-68f4-5bde-ab56-19087825e47f", 00:13:43.751 "is_configured": true, 00:13:43.751 "data_offset": 2048, 00:13:43.751 "data_size": 63488 00:13:43.751 } 00:13:43.751 ] 00:13:43.751 }' 00:13:43.751 05:02:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:43.751 05:02:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:43.751 05:02:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:43.751 05:02:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:43.751 05:02:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:13:43.751 05:02:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:43.751 05:02:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:43.751 05:02:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:43.751 05:02:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:43.751 05:02:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:43.751 05:02:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:43.751 05:02:54 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:43.751 05:02:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:43.751 05:02:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:43.751 05:02:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:43.751 05:02:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:43.751 05:02:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.751 05:02:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:43.751 05:02:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.751 05:02:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:43.751 "name": "raid_bdev1", 00:13:43.751 "uuid": "9c818957-5cdb-49a0-b889-284244afeb02", 00:13:43.751 "strip_size_kb": 64, 00:13:43.751 "state": "online", 00:13:43.751 "raid_level": "raid5f", 00:13:43.751 "superblock": true, 00:13:43.751 "num_base_bdevs": 3, 00:13:43.751 "num_base_bdevs_discovered": 3, 00:13:43.751 "num_base_bdevs_operational": 3, 00:13:43.751 "base_bdevs_list": [ 00:13:43.751 { 00:13:43.751 "name": "spare", 00:13:43.751 "uuid": "ca26335a-e7ce-58e3-b141-0b1a0f206e2e", 00:13:43.751 "is_configured": true, 00:13:43.751 "data_offset": 2048, 00:13:43.751 "data_size": 63488 00:13:43.751 }, 00:13:43.751 { 00:13:43.751 "name": "BaseBdev2", 00:13:43.751 "uuid": "3321156a-c10f-50ad-b198-e35670a12c0d", 00:13:43.751 "is_configured": true, 00:13:43.751 "data_offset": 2048, 00:13:43.751 "data_size": 63488 00:13:43.751 }, 00:13:43.751 { 00:13:43.751 "name": "BaseBdev3", 00:13:43.751 "uuid": "b1c0a9d2-68f4-5bde-ab56-19087825e47f", 00:13:43.751 "is_configured": true, 00:13:43.751 "data_offset": 2048, 00:13:43.751 
"data_size": 63488 00:13:43.751 } 00:13:43.751 ] 00:13:43.751 }' 00:13:43.751 05:02:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:43.751 05:02:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:44.321 05:02:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:44.321 05:02:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.321 05:02:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:44.321 [2024-12-14 05:02:54.938013] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:44.321 [2024-12-14 05:02:54.938088] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:44.322 [2024-12-14 05:02:54.938195] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:44.322 [2024-12-14 05:02:54.938309] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:44.322 [2024-12-14 05:02:54.938358] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:13:44.322 05:02:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.322 05:02:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:44.322 05:02:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:13:44.322 05:02:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.322 05:02:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:44.322 05:02:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.322 05:02:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 
00:13:44.322 05:02:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:44.322 05:02:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:13:44.322 05:02:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:13:44.322 05:02:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:44.322 05:02:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:13:44.322 05:02:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:44.322 05:02:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:44.322 05:02:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:44.322 05:02:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:13:44.322 05:02:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:44.322 05:02:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:44.322 05:02:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:13:44.322 /dev/nbd0 00:13:44.582 05:02:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:44.582 05:02:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:44.583 05:02:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:13:44.583 05:02:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:13:44.583 05:02:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:44.583 05:02:55 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:44.583 05:02:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:13:44.583 05:02:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:13:44.583 05:02:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:44.583 05:02:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:44.583 05:02:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:44.583 1+0 records in 00:13:44.583 1+0 records out 00:13:44.583 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000385627 s, 10.6 MB/s 00:13:44.583 05:02:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:44.583 05:02:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:13:44.583 05:02:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:44.583 05:02:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:44.583 05:02:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:13:44.583 05:02:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:44.583 05:02:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:44.583 05:02:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:13:44.583 /dev/nbd1 00:13:44.583 05:02:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:44.583 05:02:55 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:44.583 05:02:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:13:44.583 05:02:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:13:44.583 05:02:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:44.583 05:02:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:44.583 05:02:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:13:44.583 05:02:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:13:44.583 05:02:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:44.583 05:02:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:44.583 05:02:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:44.583 1+0 records in 00:13:44.583 1+0 records out 00:13:44.583 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000265779 s, 15.4 MB/s 00:13:44.843 05:02:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:44.843 05:02:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:13:44.843 05:02:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:44.843 05:02:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:44.843 05:02:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:13:44.843 05:02:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:44.843 05:02:55 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:44.843 05:02:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:13:44.843 05:02:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:13:44.843 05:02:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:44.843 05:02:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:44.843 05:02:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:44.843 05:02:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:13:44.843 05:02:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:44.843 05:02:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:45.103 05:02:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:45.103 05:02:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:45.103 05:02:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:45.103 05:02:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:45.103 05:02:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:45.103 05:02:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:45.103 05:02:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:13:45.103 05:02:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:13:45.103 05:02:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:45.103 
05:02:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:45.103 05:02:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:45.103 05:02:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:45.103 05:02:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:45.103 05:02:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:45.103 05:02:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:45.103 05:02:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:45.103 05:02:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:13:45.103 05:02:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:13:45.103 05:02:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:13:45.103 05:02:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:13:45.103 05:02:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.103 05:02:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:45.103 05:02:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.103 05:02:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:45.103 05:02:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.103 05:02:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:45.103 [2024-12-14 05:02:55.981809] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:45.103 
[2024-12-14 05:02:55.981870] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:45.103 [2024-12-14 05:02:55.981891] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:13:45.103 [2024-12-14 05:02:55.981902] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:45.363 [2024-12-14 05:02:55.984133] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:45.363 [2024-12-14 05:02:55.984181] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:45.363 [2024-12-14 05:02:55.984263] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:45.363 [2024-12-14 05:02:55.984301] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:45.363 [2024-12-14 05:02:55.984418] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:45.363 [2024-12-14 05:02:55.984529] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:45.363 spare 00:13:45.363 05:02:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.363 05:02:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:13:45.363 05:02:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.363 05:02:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:45.363 [2024-12-14 05:02:56.084422] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006600 00:13:45.363 [2024-12-14 05:02:56.084446] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:13:45.363 [2024-12-14 05:02:56.084712] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000047560 00:13:45.363 [2024-12-14 05:02:56.085115] 
bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006600 00:13:45.363 [2024-12-14 05:02:56.085129] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006600 00:13:45.363 [2024-12-14 05:02:56.085288] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:45.363 05:02:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.363 05:02:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:13:45.363 05:02:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:45.363 05:02:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:45.363 05:02:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:45.363 05:02:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:45.363 05:02:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:45.363 05:02:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:45.363 05:02:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:45.363 05:02:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:45.363 05:02:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:45.364 05:02:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:45.364 05:02:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:45.364 05:02:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.364 05:02:56 bdev_raid.raid5f_rebuild_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:13:45.364 05:02:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.364 05:02:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:45.364 "name": "raid_bdev1", 00:13:45.364 "uuid": "9c818957-5cdb-49a0-b889-284244afeb02", 00:13:45.364 "strip_size_kb": 64, 00:13:45.364 "state": "online", 00:13:45.364 "raid_level": "raid5f", 00:13:45.364 "superblock": true, 00:13:45.364 "num_base_bdevs": 3, 00:13:45.364 "num_base_bdevs_discovered": 3, 00:13:45.364 "num_base_bdevs_operational": 3, 00:13:45.364 "base_bdevs_list": [ 00:13:45.364 { 00:13:45.364 "name": "spare", 00:13:45.364 "uuid": "ca26335a-e7ce-58e3-b141-0b1a0f206e2e", 00:13:45.364 "is_configured": true, 00:13:45.364 "data_offset": 2048, 00:13:45.364 "data_size": 63488 00:13:45.364 }, 00:13:45.364 { 00:13:45.364 "name": "BaseBdev2", 00:13:45.364 "uuid": "3321156a-c10f-50ad-b198-e35670a12c0d", 00:13:45.364 "is_configured": true, 00:13:45.364 "data_offset": 2048, 00:13:45.364 "data_size": 63488 00:13:45.364 }, 00:13:45.364 { 00:13:45.364 "name": "BaseBdev3", 00:13:45.364 "uuid": "b1c0a9d2-68f4-5bde-ab56-19087825e47f", 00:13:45.364 "is_configured": true, 00:13:45.364 "data_offset": 2048, 00:13:45.364 "data_size": 63488 00:13:45.364 } 00:13:45.364 ] 00:13:45.364 }' 00:13:45.364 05:02:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:45.364 05:02:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:45.623 05:02:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:45.623 05:02:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:45.883 05:02:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:45.883 05:02:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # 
local target=none 00:13:45.883 05:02:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:45.883 05:02:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:45.883 05:02:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:45.883 05:02:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.883 05:02:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:45.883 05:02:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.883 05:02:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:45.883 "name": "raid_bdev1", 00:13:45.883 "uuid": "9c818957-5cdb-49a0-b889-284244afeb02", 00:13:45.883 "strip_size_kb": 64, 00:13:45.883 "state": "online", 00:13:45.883 "raid_level": "raid5f", 00:13:45.883 "superblock": true, 00:13:45.883 "num_base_bdevs": 3, 00:13:45.883 "num_base_bdevs_discovered": 3, 00:13:45.883 "num_base_bdevs_operational": 3, 00:13:45.883 "base_bdevs_list": [ 00:13:45.883 { 00:13:45.883 "name": "spare", 00:13:45.883 "uuid": "ca26335a-e7ce-58e3-b141-0b1a0f206e2e", 00:13:45.883 "is_configured": true, 00:13:45.883 "data_offset": 2048, 00:13:45.883 "data_size": 63488 00:13:45.883 }, 00:13:45.883 { 00:13:45.883 "name": "BaseBdev2", 00:13:45.883 "uuid": "3321156a-c10f-50ad-b198-e35670a12c0d", 00:13:45.883 "is_configured": true, 00:13:45.883 "data_offset": 2048, 00:13:45.883 "data_size": 63488 00:13:45.883 }, 00:13:45.883 { 00:13:45.883 "name": "BaseBdev3", 00:13:45.883 "uuid": "b1c0a9d2-68f4-5bde-ab56-19087825e47f", 00:13:45.883 "is_configured": true, 00:13:45.883 "data_offset": 2048, 00:13:45.883 "data_size": 63488 00:13:45.883 } 00:13:45.883 ] 00:13:45.883 }' 00:13:45.883 05:02:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // 
"none"' 00:13:45.883 05:02:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:45.883 05:02:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:45.883 05:02:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:45.883 05:02:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:45.884 05:02:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.884 05:02:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:45.884 05:02:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:13:45.884 05:02:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.884 05:02:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:13:45.884 05:02:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:45.884 05:02:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.884 05:02:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:45.884 [2024-12-14 05:02:56.681513] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:45.884 05:02:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.884 05:02:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:13:45.884 05:02:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:45.884 05:02:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:45.884 05:02:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # 
local raid_level=raid5f 00:13:45.884 05:02:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:45.884 05:02:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:45.884 05:02:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:45.884 05:02:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:45.884 05:02:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:45.884 05:02:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:45.884 05:02:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:45.884 05:02:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.884 05:02:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:45.884 05:02:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:45.884 05:02:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.884 05:02:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:45.884 "name": "raid_bdev1", 00:13:45.884 "uuid": "9c818957-5cdb-49a0-b889-284244afeb02", 00:13:45.884 "strip_size_kb": 64, 00:13:45.884 "state": "online", 00:13:45.884 "raid_level": "raid5f", 00:13:45.884 "superblock": true, 00:13:45.884 "num_base_bdevs": 3, 00:13:45.884 "num_base_bdevs_discovered": 2, 00:13:45.884 "num_base_bdevs_operational": 2, 00:13:45.884 "base_bdevs_list": [ 00:13:45.884 { 00:13:45.884 "name": null, 00:13:45.884 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:45.884 "is_configured": false, 00:13:45.884 "data_offset": 0, 00:13:45.884 "data_size": 63488 00:13:45.884 }, 00:13:45.884 { 00:13:45.884 "name": "BaseBdev2", 
00:13:45.884 "uuid": "3321156a-c10f-50ad-b198-e35670a12c0d", 00:13:45.884 "is_configured": true, 00:13:45.884 "data_offset": 2048, 00:13:45.884 "data_size": 63488 00:13:45.884 }, 00:13:45.884 { 00:13:45.884 "name": "BaseBdev3", 00:13:45.884 "uuid": "b1c0a9d2-68f4-5bde-ab56-19087825e47f", 00:13:45.884 "is_configured": true, 00:13:45.884 "data_offset": 2048, 00:13:45.884 "data_size": 63488 00:13:45.884 } 00:13:45.884 ] 00:13:45.884 }' 00:13:45.884 05:02:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:45.884 05:02:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:46.454 05:02:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:46.454 05:02:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.454 05:02:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:46.454 [2024-12-14 05:02:57.188665] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:46.454 [2024-12-14 05:02:57.188883] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:13:46.454 [2024-12-14 05:02:57.188952] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:13:46.454 [2024-12-14 05:02:57.189029] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:46.454 [2024-12-14 05:02:57.192722] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000047630 00:13:46.454 05:02:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.454 05:02:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:13:46.454 [2024-12-14 05:02:57.194812] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:47.394 05:02:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:47.394 05:02:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:47.394 05:02:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:47.394 05:02:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:47.394 05:02:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:47.394 05:02:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:47.394 05:02:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:47.394 05:02:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.394 05:02:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.394 05:02:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.394 05:02:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:47.394 "name": "raid_bdev1", 00:13:47.394 "uuid": "9c818957-5cdb-49a0-b889-284244afeb02", 00:13:47.394 "strip_size_kb": 64, 00:13:47.394 "state": "online", 00:13:47.394 
"raid_level": "raid5f", 00:13:47.394 "superblock": true, 00:13:47.394 "num_base_bdevs": 3, 00:13:47.394 "num_base_bdevs_discovered": 3, 00:13:47.394 "num_base_bdevs_operational": 3, 00:13:47.394 "process": { 00:13:47.394 "type": "rebuild", 00:13:47.394 "target": "spare", 00:13:47.394 "progress": { 00:13:47.394 "blocks": 20480, 00:13:47.394 "percent": 16 00:13:47.394 } 00:13:47.394 }, 00:13:47.394 "base_bdevs_list": [ 00:13:47.394 { 00:13:47.394 "name": "spare", 00:13:47.394 "uuid": "ca26335a-e7ce-58e3-b141-0b1a0f206e2e", 00:13:47.394 "is_configured": true, 00:13:47.394 "data_offset": 2048, 00:13:47.394 "data_size": 63488 00:13:47.394 }, 00:13:47.394 { 00:13:47.394 "name": "BaseBdev2", 00:13:47.394 "uuid": "3321156a-c10f-50ad-b198-e35670a12c0d", 00:13:47.394 "is_configured": true, 00:13:47.394 "data_offset": 2048, 00:13:47.394 "data_size": 63488 00:13:47.394 }, 00:13:47.394 { 00:13:47.394 "name": "BaseBdev3", 00:13:47.394 "uuid": "b1c0a9d2-68f4-5bde-ab56-19087825e47f", 00:13:47.394 "is_configured": true, 00:13:47.394 "data_offset": 2048, 00:13:47.394 "data_size": 63488 00:13:47.394 } 00:13:47.394 ] 00:13:47.394 }' 00:13:47.394 05:02:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:47.654 05:02:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:47.654 05:02:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:47.654 05:02:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:47.654 05:02:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:13:47.654 05:02:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.654 05:02:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.654 [2024-12-14 05:02:58.355550] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:47.654 [2024-12-14 05:02:58.401415] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:47.654 [2024-12-14 05:02:58.401465] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:47.654 [2024-12-14 05:02:58.401481] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:47.654 [2024-12-14 05:02:58.401487] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:47.654 05:02:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.654 05:02:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:13:47.654 05:02:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:47.654 05:02:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:47.654 05:02:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:47.654 05:02:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:47.654 05:02:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:47.654 05:02:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:47.654 05:02:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:47.654 05:02:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:47.654 05:02:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:47.654 05:02:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:47.654 05:02:58 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:47.654 05:02:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.654 05:02:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.654 05:02:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.654 05:02:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:47.654 "name": "raid_bdev1", 00:13:47.654 "uuid": "9c818957-5cdb-49a0-b889-284244afeb02", 00:13:47.654 "strip_size_kb": 64, 00:13:47.654 "state": "online", 00:13:47.654 "raid_level": "raid5f", 00:13:47.654 "superblock": true, 00:13:47.654 "num_base_bdevs": 3, 00:13:47.654 "num_base_bdevs_discovered": 2, 00:13:47.654 "num_base_bdevs_operational": 2, 00:13:47.654 "base_bdevs_list": [ 00:13:47.654 { 00:13:47.654 "name": null, 00:13:47.654 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:47.654 "is_configured": false, 00:13:47.654 "data_offset": 0, 00:13:47.654 "data_size": 63488 00:13:47.654 }, 00:13:47.654 { 00:13:47.654 "name": "BaseBdev2", 00:13:47.654 "uuid": "3321156a-c10f-50ad-b198-e35670a12c0d", 00:13:47.654 "is_configured": true, 00:13:47.654 "data_offset": 2048, 00:13:47.654 "data_size": 63488 00:13:47.654 }, 00:13:47.654 { 00:13:47.654 "name": "BaseBdev3", 00:13:47.654 "uuid": "b1c0a9d2-68f4-5bde-ab56-19087825e47f", 00:13:47.654 "is_configured": true, 00:13:47.654 "data_offset": 2048, 00:13:47.654 "data_size": 63488 00:13:47.654 } 00:13:47.654 ] 00:13:47.654 }' 00:13:47.654 05:02:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:47.654 05:02:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:48.224 05:02:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:48.224 05:02:58 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.224 05:02:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:48.224 [2024-12-14 05:02:58.849590] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:48.224 [2024-12-14 05:02:58.849702] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:48.224 [2024-12-14 05:02:58.849742] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:13:48.224 [2024-12-14 05:02:58.849769] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:48.224 [2024-12-14 05:02:58.850230] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:48.224 [2024-12-14 05:02:58.850288] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:48.224 [2024-12-14 05:02:58.850396] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:48.224 [2024-12-14 05:02:58.850435] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:13:48.224 [2024-12-14 05:02:58.850475] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:13:48.224 [2024-12-14 05:02:58.850533] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:48.224 [2024-12-14 05:02:58.853751] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000047700 00:13:48.224 spare 00:13:48.225 05:02:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.225 05:02:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:13:48.225 [2024-12-14 05:02:58.855837] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:49.204 05:02:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:49.204 05:02:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:49.204 05:02:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:49.204 05:02:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:49.204 05:02:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:49.204 05:02:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:49.204 05:02:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:49.204 05:02:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.204 05:02:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:49.204 05:02:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.204 05:02:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:49.204 "name": "raid_bdev1", 00:13:49.204 "uuid": "9c818957-5cdb-49a0-b889-284244afeb02", 00:13:49.204 "strip_size_kb": 64, 00:13:49.204 "state": 
"online", 00:13:49.204 "raid_level": "raid5f", 00:13:49.204 "superblock": true, 00:13:49.204 "num_base_bdevs": 3, 00:13:49.204 "num_base_bdevs_discovered": 3, 00:13:49.204 "num_base_bdevs_operational": 3, 00:13:49.204 "process": { 00:13:49.204 "type": "rebuild", 00:13:49.204 "target": "spare", 00:13:49.204 "progress": { 00:13:49.204 "blocks": 20480, 00:13:49.204 "percent": 16 00:13:49.204 } 00:13:49.204 }, 00:13:49.204 "base_bdevs_list": [ 00:13:49.204 { 00:13:49.204 "name": "spare", 00:13:49.204 "uuid": "ca26335a-e7ce-58e3-b141-0b1a0f206e2e", 00:13:49.204 "is_configured": true, 00:13:49.204 "data_offset": 2048, 00:13:49.204 "data_size": 63488 00:13:49.204 }, 00:13:49.204 { 00:13:49.204 "name": "BaseBdev2", 00:13:49.204 "uuid": "3321156a-c10f-50ad-b198-e35670a12c0d", 00:13:49.204 "is_configured": true, 00:13:49.204 "data_offset": 2048, 00:13:49.204 "data_size": 63488 00:13:49.204 }, 00:13:49.204 { 00:13:49.204 "name": "BaseBdev3", 00:13:49.204 "uuid": "b1c0a9d2-68f4-5bde-ab56-19087825e47f", 00:13:49.204 "is_configured": true, 00:13:49.204 "data_offset": 2048, 00:13:49.204 "data_size": 63488 00:13:49.204 } 00:13:49.204 ] 00:13:49.204 }' 00:13:49.204 05:02:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:49.204 05:02:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:49.204 05:02:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:49.204 05:03:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:49.204 05:03:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:13:49.204 05:03:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.204 05:03:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:49.204 [2024-12-14 05:03:00.016399] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:49.204 [2024-12-14 05:03:00.062218] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:49.204 [2024-12-14 05:03:00.062272] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:49.204 [2024-12-14 05:03:00.062286] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:49.204 [2024-12-14 05:03:00.062296] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:49.204 05:03:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.204 05:03:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:13:49.204 05:03:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:49.204 05:03:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:49.204 05:03:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:49.204 05:03:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:49.205 05:03:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:49.205 05:03:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:49.205 05:03:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:49.205 05:03:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:49.205 05:03:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:49.476 05:03:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:49.476 05:03:00 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:49.476 05:03:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.476 05:03:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:49.476 05:03:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.476 05:03:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:49.476 "name": "raid_bdev1", 00:13:49.476 "uuid": "9c818957-5cdb-49a0-b889-284244afeb02", 00:13:49.476 "strip_size_kb": 64, 00:13:49.476 "state": "online", 00:13:49.476 "raid_level": "raid5f", 00:13:49.476 "superblock": true, 00:13:49.476 "num_base_bdevs": 3, 00:13:49.476 "num_base_bdevs_discovered": 2, 00:13:49.476 "num_base_bdevs_operational": 2, 00:13:49.476 "base_bdevs_list": [ 00:13:49.476 { 00:13:49.476 "name": null, 00:13:49.476 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:49.476 "is_configured": false, 00:13:49.476 "data_offset": 0, 00:13:49.476 "data_size": 63488 00:13:49.476 }, 00:13:49.476 { 00:13:49.476 "name": "BaseBdev2", 00:13:49.476 "uuid": "3321156a-c10f-50ad-b198-e35670a12c0d", 00:13:49.476 "is_configured": true, 00:13:49.476 "data_offset": 2048, 00:13:49.476 "data_size": 63488 00:13:49.476 }, 00:13:49.476 { 00:13:49.476 "name": "BaseBdev3", 00:13:49.476 "uuid": "b1c0a9d2-68f4-5bde-ab56-19087825e47f", 00:13:49.476 "is_configured": true, 00:13:49.476 "data_offset": 2048, 00:13:49.476 "data_size": 63488 00:13:49.476 } 00:13:49.476 ] 00:13:49.476 }' 00:13:49.476 05:03:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:49.476 05:03:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:49.736 05:03:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:49.736 05:03:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:13:49.736 05:03:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:49.736 05:03:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:49.736 05:03:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:49.736 05:03:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:49.736 05:03:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:49.736 05:03:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.736 05:03:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:49.736 05:03:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.736 05:03:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:49.736 "name": "raid_bdev1", 00:13:49.736 "uuid": "9c818957-5cdb-49a0-b889-284244afeb02", 00:13:49.736 "strip_size_kb": 64, 00:13:49.736 "state": "online", 00:13:49.736 "raid_level": "raid5f", 00:13:49.736 "superblock": true, 00:13:49.736 "num_base_bdevs": 3, 00:13:49.736 "num_base_bdevs_discovered": 2, 00:13:49.736 "num_base_bdevs_operational": 2, 00:13:49.736 "base_bdevs_list": [ 00:13:49.736 { 00:13:49.736 "name": null, 00:13:49.736 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:49.736 "is_configured": false, 00:13:49.736 "data_offset": 0, 00:13:49.736 "data_size": 63488 00:13:49.736 }, 00:13:49.736 { 00:13:49.736 "name": "BaseBdev2", 00:13:49.736 "uuid": "3321156a-c10f-50ad-b198-e35670a12c0d", 00:13:49.736 "is_configured": true, 00:13:49.736 "data_offset": 2048, 00:13:49.736 "data_size": 63488 00:13:49.736 }, 00:13:49.736 { 00:13:49.736 "name": "BaseBdev3", 00:13:49.736 "uuid": "b1c0a9d2-68f4-5bde-ab56-19087825e47f", 00:13:49.736 "is_configured": true, 
00:13:49.736 "data_offset": 2048, 00:13:49.736 "data_size": 63488 00:13:49.736 } 00:13:49.736 ] 00:13:49.736 }' 00:13:49.736 05:03:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:49.736 05:03:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:49.736 05:03:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:49.996 05:03:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:49.996 05:03:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:13:49.996 05:03:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.996 05:03:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:49.996 05:03:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.996 05:03:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:49.996 05:03:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.996 05:03:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:49.996 [2024-12-14 05:03:00.670152] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:49.996 [2024-12-14 05:03:00.670216] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:49.996 [2024-12-14 05:03:00.670236] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:13:49.996 [2024-12-14 05:03:00.670247] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:49.996 [2024-12-14 05:03:00.670623] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:49.996 [2024-12-14 
05:03:00.670643] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:49.996 [2024-12-14 05:03:00.670708] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:13:49.996 [2024-12-14 05:03:00.670724] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:13:49.996 [2024-12-14 05:03:00.670732] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:49.996 [2024-12-14 05:03:00.670743] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:13:49.996 BaseBdev1 00:13:49.996 05:03:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.996 05:03:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:13:50.935 05:03:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:13:50.935 05:03:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:50.935 05:03:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:50.935 05:03:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:50.935 05:03:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:50.935 05:03:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:50.935 05:03:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:50.935 05:03:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:50.935 05:03:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:50.935 05:03:01 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:50.935 05:03:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:50.935 05:03:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:50.935 05:03:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.935 05:03:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:50.935 05:03:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.935 05:03:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:50.935 "name": "raid_bdev1", 00:13:50.935 "uuid": "9c818957-5cdb-49a0-b889-284244afeb02", 00:13:50.935 "strip_size_kb": 64, 00:13:50.935 "state": "online", 00:13:50.935 "raid_level": "raid5f", 00:13:50.935 "superblock": true, 00:13:50.935 "num_base_bdevs": 3, 00:13:50.935 "num_base_bdevs_discovered": 2, 00:13:50.935 "num_base_bdevs_operational": 2, 00:13:50.935 "base_bdevs_list": [ 00:13:50.935 { 00:13:50.935 "name": null, 00:13:50.935 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:50.935 "is_configured": false, 00:13:50.935 "data_offset": 0, 00:13:50.935 "data_size": 63488 00:13:50.935 }, 00:13:50.935 { 00:13:50.935 "name": "BaseBdev2", 00:13:50.935 "uuid": "3321156a-c10f-50ad-b198-e35670a12c0d", 00:13:50.935 "is_configured": true, 00:13:50.935 "data_offset": 2048, 00:13:50.935 "data_size": 63488 00:13:50.935 }, 00:13:50.935 { 00:13:50.935 "name": "BaseBdev3", 00:13:50.935 "uuid": "b1c0a9d2-68f4-5bde-ab56-19087825e47f", 00:13:50.935 "is_configured": true, 00:13:50.935 "data_offset": 2048, 00:13:50.935 "data_size": 63488 00:13:50.935 } 00:13:50.935 ] 00:13:50.935 }' 00:13:50.935 05:03:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:50.935 05:03:01 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:13:51.505 05:03:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:51.505 05:03:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:51.505 05:03:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:51.505 05:03:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:51.505 05:03:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:51.505 05:03:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:51.505 05:03:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:51.505 05:03:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.505 05:03:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:51.505 05:03:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.505 05:03:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:51.505 "name": "raid_bdev1", 00:13:51.505 "uuid": "9c818957-5cdb-49a0-b889-284244afeb02", 00:13:51.505 "strip_size_kb": 64, 00:13:51.505 "state": "online", 00:13:51.505 "raid_level": "raid5f", 00:13:51.505 "superblock": true, 00:13:51.505 "num_base_bdevs": 3, 00:13:51.505 "num_base_bdevs_discovered": 2, 00:13:51.505 "num_base_bdevs_operational": 2, 00:13:51.505 "base_bdevs_list": [ 00:13:51.505 { 00:13:51.505 "name": null, 00:13:51.505 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:51.505 "is_configured": false, 00:13:51.505 "data_offset": 0, 00:13:51.505 "data_size": 63488 00:13:51.505 }, 00:13:51.505 { 00:13:51.505 "name": "BaseBdev2", 00:13:51.505 "uuid": "3321156a-c10f-50ad-b198-e35670a12c0d", 
00:13:51.505 "is_configured": true, 00:13:51.505 "data_offset": 2048, 00:13:51.505 "data_size": 63488 00:13:51.505 }, 00:13:51.505 { 00:13:51.505 "name": "BaseBdev3", 00:13:51.505 "uuid": "b1c0a9d2-68f4-5bde-ab56-19087825e47f", 00:13:51.505 "is_configured": true, 00:13:51.505 "data_offset": 2048, 00:13:51.505 "data_size": 63488 00:13:51.505 } 00:13:51.505 ] 00:13:51.505 }' 00:13:51.505 05:03:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:51.505 05:03:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:51.505 05:03:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:51.505 05:03:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:51.505 05:03:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:51.505 05:03:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@650 -- # local es=0 00:13:51.505 05:03:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:51.505 05:03:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:13:51.505 05:03:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:51.505 05:03:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:13:51.505 05:03:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:51.505 05:03:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:51.505 05:03:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.505 05:03:02 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:51.505 [2024-12-14 05:03:02.271437] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:51.505 [2024-12-14 05:03:02.271659] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:13:51.505 [2024-12-14 05:03:02.271677] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:51.505 request: 00:13:51.505 { 00:13:51.505 "base_bdev": "BaseBdev1", 00:13:51.505 "raid_bdev": "raid_bdev1", 00:13:51.505 "method": "bdev_raid_add_base_bdev", 00:13:51.505 "req_id": 1 00:13:51.505 } 00:13:51.505 Got JSON-RPC error response 00:13:51.505 response: 00:13:51.505 { 00:13:51.505 "code": -22, 00:13:51.505 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:13:51.505 } 00:13:51.505 05:03:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:13:51.505 05:03:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # es=1 00:13:51.505 05:03:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:51.505 05:03:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:51.505 05:03:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:51.505 05:03:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:13:52.444 05:03:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:13:52.444 05:03:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:52.444 05:03:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:52.444 05:03:03 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:52.444 05:03:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:52.444 05:03:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:52.444 05:03:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:52.444 05:03:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:52.444 05:03:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:52.444 05:03:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:52.444 05:03:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:52.444 05:03:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:52.444 05:03:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.444 05:03:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.444 05:03:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.703 05:03:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:52.703 "name": "raid_bdev1", 00:13:52.703 "uuid": "9c818957-5cdb-49a0-b889-284244afeb02", 00:13:52.703 "strip_size_kb": 64, 00:13:52.703 "state": "online", 00:13:52.703 "raid_level": "raid5f", 00:13:52.703 "superblock": true, 00:13:52.703 "num_base_bdevs": 3, 00:13:52.703 "num_base_bdevs_discovered": 2, 00:13:52.703 "num_base_bdevs_operational": 2, 00:13:52.703 "base_bdevs_list": [ 00:13:52.703 { 00:13:52.703 "name": null, 00:13:52.703 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:52.703 "is_configured": false, 00:13:52.703 "data_offset": 0, 00:13:52.703 "data_size": 63488 00:13:52.703 }, 00:13:52.703 { 00:13:52.703 
"name": "BaseBdev2", 00:13:52.703 "uuid": "3321156a-c10f-50ad-b198-e35670a12c0d", 00:13:52.703 "is_configured": true, 00:13:52.703 "data_offset": 2048, 00:13:52.703 "data_size": 63488 00:13:52.703 }, 00:13:52.703 { 00:13:52.703 "name": "BaseBdev3", 00:13:52.703 "uuid": "b1c0a9d2-68f4-5bde-ab56-19087825e47f", 00:13:52.703 "is_configured": true, 00:13:52.703 "data_offset": 2048, 00:13:52.703 "data_size": 63488 00:13:52.703 } 00:13:52.703 ] 00:13:52.703 }' 00:13:52.704 05:03:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:52.704 05:03:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.964 05:03:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:52.964 05:03:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:52.964 05:03:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:52.964 05:03:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:52.964 05:03:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:52.964 05:03:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:52.964 05:03:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:52.964 05:03:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.964 05:03:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.964 05:03:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.964 05:03:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:52.964 "name": "raid_bdev1", 00:13:52.964 "uuid": "9c818957-5cdb-49a0-b889-284244afeb02", 00:13:52.964 
"strip_size_kb": 64, 00:13:52.964 "state": "online", 00:13:52.964 "raid_level": "raid5f", 00:13:52.964 "superblock": true, 00:13:52.964 "num_base_bdevs": 3, 00:13:52.964 "num_base_bdevs_discovered": 2, 00:13:52.964 "num_base_bdevs_operational": 2, 00:13:52.964 "base_bdevs_list": [ 00:13:52.964 { 00:13:52.964 "name": null, 00:13:52.964 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:52.964 "is_configured": false, 00:13:52.964 "data_offset": 0, 00:13:52.964 "data_size": 63488 00:13:52.964 }, 00:13:52.964 { 00:13:52.964 "name": "BaseBdev2", 00:13:52.964 "uuid": "3321156a-c10f-50ad-b198-e35670a12c0d", 00:13:52.964 "is_configured": true, 00:13:52.964 "data_offset": 2048, 00:13:52.964 "data_size": 63488 00:13:52.964 }, 00:13:52.964 { 00:13:52.964 "name": "BaseBdev3", 00:13:52.964 "uuid": "b1c0a9d2-68f4-5bde-ab56-19087825e47f", 00:13:52.964 "is_configured": true, 00:13:52.964 "data_offset": 2048, 00:13:52.964 "data_size": 63488 00:13:52.964 } 00:13:52.964 ] 00:13:52.964 }' 00:13:52.964 05:03:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:52.964 05:03:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:52.964 05:03:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:53.224 05:03:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:53.224 05:03:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 92525 00:13:53.224 05:03:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@950 -- # '[' -z 92525 ']' 00:13:53.224 05:03:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # kill -0 92525 00:13:53.224 05:03:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@955 -- # uname 00:13:53.224 05:03:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:53.224 05:03:03 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 92525 00:13:53.224 killing process with pid 92525 00:13:53.224 Received shutdown signal, test time was about 60.000000 seconds 00:13:53.224 00:13:53.224 Latency(us) 00:13:53.224 [2024-12-14T05:03:04.107Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:53.224 [2024-12-14T05:03:04.107Z] =================================================================================================================== 00:13:53.224 [2024-12-14T05:03:04.107Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:53.224 05:03:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:53.224 05:03:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:53.224 05:03:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 92525' 00:13:53.224 05:03:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@969 -- # kill 92525 00:13:53.224 [2024-12-14 05:03:03.926501] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:53.224 [2024-12-14 05:03:03.926610] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:53.224 05:03:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@974 -- # wait 92525 00:13:53.224 [2024-12-14 05:03:03.926670] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:53.224 [2024-12-14 05:03:03.926680] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state offline 00:13:53.224 [2024-12-14 05:03:03.968027] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:53.484 05:03:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:13:53.484 00:13:53.484 real 0m21.474s 00:13:53.484 user 0m27.934s 
00:13:53.484 sys 0m2.734s 00:13:53.484 05:03:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:53.484 05:03:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:53.484 ************************************ 00:13:53.484 END TEST raid5f_rebuild_test_sb 00:13:53.484 ************************************ 00:13:53.484 05:03:04 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:13:53.484 05:03:04 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 4 false 00:13:53.484 05:03:04 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:13:53.484 05:03:04 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:53.484 05:03:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:53.484 ************************************ 00:13:53.484 START TEST raid5f_state_function_test 00:13:53.484 ************************************ 00:13:53.484 05:03:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid5f 4 false 00:13:53.484 05:03:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:13:53.485 05:03:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:13:53.485 05:03:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:13:53.485 05:03:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:13:53.485 05:03:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:13:53.485 05:03:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:53.485 05:03:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:13:53.485 05:03:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 
00:13:53.485 05:03:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:53.485 05:03:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:13:53.485 05:03:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:53.485 05:03:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:53.485 05:03:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:13:53.485 05:03:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:53.485 05:03:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:53.485 05:03:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:13:53.485 05:03:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:53.485 05:03:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:53.485 05:03:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:53.485 05:03:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:13:53.485 05:03:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:13:53.485 05:03:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:13:53.485 05:03:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:13:53.485 05:03:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:13:53.485 05:03:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:13:53.485 05:03:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # 
strip_size=64 00:13:53.485 05:03:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:13:53.485 05:03:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:13:53.485 05:03:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:13:53.485 05:03:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=93258 00:13:53.485 05:03:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:13:53.485 05:03:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 93258' 00:13:53.485 Process raid pid: 93258 00:13:53.485 05:03:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 93258 00:13:53.485 05:03:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 93258 ']' 00:13:53.485 05:03:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:53.485 05:03:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:53.485 05:03:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:53.485 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:53.485 05:03:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:53.485 05:03:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.745 [2024-12-14 05:03:04.381785] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:13:53.745 [2024-12-14 05:03:04.382030] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:53.745 [2024-12-14 05:03:04.544918] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:53.745 [2024-12-14 05:03:04.592845] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:54.004 [2024-12-14 05:03:04.636192] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:54.004 [2024-12-14 05:03:04.636304] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:54.573 05:03:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:54.573 05:03:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:13:54.573 05:03:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:54.573 05:03:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.573 05:03:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:54.573 [2024-12-14 05:03:05.214217] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:54.573 [2024-12-14 05:03:05.214320] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:54.573 [2024-12-14 05:03:05.214352] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:54.573 [2024-12-14 05:03:05.214375] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:54.573 [2024-12-14 05:03:05.214392] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:13:54.573 [2024-12-14 05:03:05.214415] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:54.573 [2024-12-14 05:03:05.214448] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:54.573 [2024-12-14 05:03:05.214468] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:54.573 05:03:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.573 05:03:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:13:54.573 05:03:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:54.573 05:03:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:54.573 05:03:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:54.573 05:03:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:54.573 05:03:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:54.573 05:03:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:54.573 05:03:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:54.573 05:03:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:54.573 05:03:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:54.573 05:03:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:54.573 05:03:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.573 05:03:05 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:54.573 05:03:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:54.573 05:03:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.573 05:03:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:54.573 "name": "Existed_Raid", 00:13:54.573 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:54.573 "strip_size_kb": 64, 00:13:54.573 "state": "configuring", 00:13:54.573 "raid_level": "raid5f", 00:13:54.573 "superblock": false, 00:13:54.573 "num_base_bdevs": 4, 00:13:54.573 "num_base_bdevs_discovered": 0, 00:13:54.573 "num_base_bdevs_operational": 4, 00:13:54.573 "base_bdevs_list": [ 00:13:54.573 { 00:13:54.573 "name": "BaseBdev1", 00:13:54.573 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:54.573 "is_configured": false, 00:13:54.573 "data_offset": 0, 00:13:54.573 "data_size": 0 00:13:54.573 }, 00:13:54.573 { 00:13:54.573 "name": "BaseBdev2", 00:13:54.573 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:54.573 "is_configured": false, 00:13:54.573 "data_offset": 0, 00:13:54.573 "data_size": 0 00:13:54.573 }, 00:13:54.573 { 00:13:54.573 "name": "BaseBdev3", 00:13:54.573 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:54.573 "is_configured": false, 00:13:54.573 "data_offset": 0, 00:13:54.573 "data_size": 0 00:13:54.573 }, 00:13:54.573 { 00:13:54.573 "name": "BaseBdev4", 00:13:54.573 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:54.573 "is_configured": false, 00:13:54.573 "data_offset": 0, 00:13:54.573 "data_size": 0 00:13:54.573 } 00:13:54.573 ] 00:13:54.573 }' 00:13:54.573 05:03:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:54.573 05:03:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:54.833 05:03:05 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:54.833 05:03:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.833 05:03:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:54.833 [2024-12-14 05:03:05.693259] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:54.833 [2024-12-14 05:03:05.693355] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:13:54.833 05:03:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.833 05:03:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:54.833 05:03:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.833 05:03:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:54.833 [2024-12-14 05:03:05.701298] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:54.833 [2024-12-14 05:03:05.701337] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:54.833 [2024-12-14 05:03:05.701345] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:54.833 [2024-12-14 05:03:05.701354] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:54.833 [2024-12-14 05:03:05.701359] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:54.833 [2024-12-14 05:03:05.701368] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:54.833 [2024-12-14 05:03:05.701373] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:13:54.833 [2024-12-14 05:03:05.701382] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:54.833 05:03:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.833 05:03:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:54.833 05:03:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.833 05:03:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.093 [2024-12-14 05:03:05.718324] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:55.093 BaseBdev1 00:13:55.093 05:03:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.093 05:03:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:13:55.093 05:03:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:13:55.093 05:03:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:55.093 05:03:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:13:55.093 05:03:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:55.093 05:03:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:55.093 05:03:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:55.093 05:03:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.093 05:03:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.093 05:03:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.093 
05:03:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:55.093 05:03:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.093 05:03:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.093 [ 00:13:55.093 { 00:13:55.093 "name": "BaseBdev1", 00:13:55.093 "aliases": [ 00:13:55.093 "5a87e417-d745-4825-bd1d-eb56024e8d34" 00:13:55.093 ], 00:13:55.093 "product_name": "Malloc disk", 00:13:55.093 "block_size": 512, 00:13:55.093 "num_blocks": 65536, 00:13:55.093 "uuid": "5a87e417-d745-4825-bd1d-eb56024e8d34", 00:13:55.093 "assigned_rate_limits": { 00:13:55.093 "rw_ios_per_sec": 0, 00:13:55.093 "rw_mbytes_per_sec": 0, 00:13:55.093 "r_mbytes_per_sec": 0, 00:13:55.093 "w_mbytes_per_sec": 0 00:13:55.093 }, 00:13:55.093 "claimed": true, 00:13:55.093 "claim_type": "exclusive_write", 00:13:55.093 "zoned": false, 00:13:55.093 "supported_io_types": { 00:13:55.093 "read": true, 00:13:55.093 "write": true, 00:13:55.093 "unmap": true, 00:13:55.093 "flush": true, 00:13:55.093 "reset": true, 00:13:55.093 "nvme_admin": false, 00:13:55.093 "nvme_io": false, 00:13:55.093 "nvme_io_md": false, 00:13:55.093 "write_zeroes": true, 00:13:55.093 "zcopy": true, 00:13:55.093 "get_zone_info": false, 00:13:55.093 "zone_management": false, 00:13:55.093 "zone_append": false, 00:13:55.093 "compare": false, 00:13:55.093 "compare_and_write": false, 00:13:55.093 "abort": true, 00:13:55.093 "seek_hole": false, 00:13:55.093 "seek_data": false, 00:13:55.093 "copy": true, 00:13:55.093 "nvme_iov_md": false 00:13:55.093 }, 00:13:55.093 "memory_domains": [ 00:13:55.093 { 00:13:55.093 "dma_device_id": "system", 00:13:55.093 "dma_device_type": 1 00:13:55.093 }, 00:13:55.093 { 00:13:55.093 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:55.093 "dma_device_type": 2 00:13:55.093 } 00:13:55.093 ], 00:13:55.093 "driver_specific": {} 00:13:55.093 } 
00:13:55.093 ] 00:13:55.093 05:03:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.093 05:03:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:13:55.093 05:03:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:13:55.093 05:03:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:55.093 05:03:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:55.093 05:03:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:55.093 05:03:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:55.093 05:03:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:55.093 05:03:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:55.093 05:03:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:55.093 05:03:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:55.093 05:03:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:55.093 05:03:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:55.093 05:03:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:55.093 05:03:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.093 05:03:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.093 05:03:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:13:55.093 05:03:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:55.093 "name": "Existed_Raid", 00:13:55.093 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:55.094 "strip_size_kb": 64, 00:13:55.094 "state": "configuring", 00:13:55.094 "raid_level": "raid5f", 00:13:55.094 "superblock": false, 00:13:55.094 "num_base_bdevs": 4, 00:13:55.094 "num_base_bdevs_discovered": 1, 00:13:55.094 "num_base_bdevs_operational": 4, 00:13:55.094 "base_bdevs_list": [ 00:13:55.094 { 00:13:55.094 "name": "BaseBdev1", 00:13:55.094 "uuid": "5a87e417-d745-4825-bd1d-eb56024e8d34", 00:13:55.094 "is_configured": true, 00:13:55.094 "data_offset": 0, 00:13:55.094 "data_size": 65536 00:13:55.094 }, 00:13:55.094 { 00:13:55.094 "name": "BaseBdev2", 00:13:55.094 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:55.094 "is_configured": false, 00:13:55.094 "data_offset": 0, 00:13:55.094 "data_size": 0 00:13:55.094 }, 00:13:55.094 { 00:13:55.094 "name": "BaseBdev3", 00:13:55.094 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:55.094 "is_configured": false, 00:13:55.094 "data_offset": 0, 00:13:55.094 "data_size": 0 00:13:55.094 }, 00:13:55.094 { 00:13:55.094 "name": "BaseBdev4", 00:13:55.094 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:55.094 "is_configured": false, 00:13:55.094 "data_offset": 0, 00:13:55.094 "data_size": 0 00:13:55.094 } 00:13:55.094 ] 00:13:55.094 }' 00:13:55.094 05:03:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:55.094 05:03:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.354 05:03:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:55.354 05:03:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.354 05:03:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.354 
[2024-12-14 05:03:06.225508] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:55.354 [2024-12-14 05:03:06.225632] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:13:55.354 05:03:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.354 05:03:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:55.354 05:03:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.354 05:03:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.614 [2024-12-14 05:03:06.237519] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:55.614 [2024-12-14 05:03:06.239385] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:55.614 [2024-12-14 05:03:06.239426] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:55.614 [2024-12-14 05:03:06.239434] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:55.614 [2024-12-14 05:03:06.239443] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:55.614 [2024-12-14 05:03:06.239449] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:55.614 [2024-12-14 05:03:06.239458] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:55.614 05:03:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.614 05:03:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:13:55.614 05:03:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( 
i < num_base_bdevs )) 00:13:55.614 05:03:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:13:55.614 05:03:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:55.614 05:03:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:55.614 05:03:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:55.614 05:03:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:55.614 05:03:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:55.614 05:03:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:55.614 05:03:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:55.614 05:03:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:55.614 05:03:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:55.614 05:03:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:55.614 05:03:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:55.614 05:03:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.614 05:03:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.614 05:03:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.614 05:03:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:55.614 "name": "Existed_Raid", 00:13:55.614 "uuid": "00000000-0000-0000-0000-000000000000", 
00:13:55.614 "strip_size_kb": 64, 00:13:55.614 "state": "configuring", 00:13:55.614 "raid_level": "raid5f", 00:13:55.614 "superblock": false, 00:13:55.614 "num_base_bdevs": 4, 00:13:55.614 "num_base_bdevs_discovered": 1, 00:13:55.614 "num_base_bdevs_operational": 4, 00:13:55.614 "base_bdevs_list": [ 00:13:55.614 { 00:13:55.614 "name": "BaseBdev1", 00:13:55.614 "uuid": "5a87e417-d745-4825-bd1d-eb56024e8d34", 00:13:55.614 "is_configured": true, 00:13:55.614 "data_offset": 0, 00:13:55.614 "data_size": 65536 00:13:55.614 }, 00:13:55.614 { 00:13:55.614 "name": "BaseBdev2", 00:13:55.614 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:55.614 "is_configured": false, 00:13:55.614 "data_offset": 0, 00:13:55.614 "data_size": 0 00:13:55.614 }, 00:13:55.614 { 00:13:55.614 "name": "BaseBdev3", 00:13:55.614 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:55.614 "is_configured": false, 00:13:55.614 "data_offset": 0, 00:13:55.614 "data_size": 0 00:13:55.614 }, 00:13:55.614 { 00:13:55.614 "name": "BaseBdev4", 00:13:55.614 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:55.614 "is_configured": false, 00:13:55.614 "data_offset": 0, 00:13:55.614 "data_size": 0 00:13:55.614 } 00:13:55.614 ] 00:13:55.614 }' 00:13:55.614 05:03:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:55.614 05:03:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.874 05:03:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:55.874 05:03:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.874 05:03:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.874 [2024-12-14 05:03:06.707772] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:55.874 BaseBdev2 00:13:55.874 05:03:06 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.874 05:03:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:13:55.874 05:03:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:13:55.874 05:03:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:55.874 05:03:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:13:55.874 05:03:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:55.874 05:03:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:55.874 05:03:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:55.874 05:03:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.874 05:03:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.874 05:03:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.874 05:03:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:55.874 05:03:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.874 05:03:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.874 [ 00:13:55.874 { 00:13:55.874 "name": "BaseBdev2", 00:13:55.874 "aliases": [ 00:13:55.874 "190e9cae-481a-4b2b-b42e-0d59c8906e25" 00:13:55.874 ], 00:13:55.874 "product_name": "Malloc disk", 00:13:55.874 "block_size": 512, 00:13:55.874 "num_blocks": 65536, 00:13:55.874 "uuid": "190e9cae-481a-4b2b-b42e-0d59c8906e25", 00:13:55.874 "assigned_rate_limits": { 00:13:55.874 "rw_ios_per_sec": 0, 00:13:55.874 "rw_mbytes_per_sec": 0, 00:13:55.874 
"r_mbytes_per_sec": 0, 00:13:55.874 "w_mbytes_per_sec": 0 00:13:55.874 }, 00:13:55.874 "claimed": true, 00:13:55.874 "claim_type": "exclusive_write", 00:13:55.874 "zoned": false, 00:13:55.874 "supported_io_types": { 00:13:55.874 "read": true, 00:13:55.874 "write": true, 00:13:55.874 "unmap": true, 00:13:55.874 "flush": true, 00:13:55.874 "reset": true, 00:13:55.874 "nvme_admin": false, 00:13:55.874 "nvme_io": false, 00:13:55.874 "nvme_io_md": false, 00:13:55.874 "write_zeroes": true, 00:13:55.874 "zcopy": true, 00:13:55.874 "get_zone_info": false, 00:13:55.874 "zone_management": false, 00:13:55.874 "zone_append": false, 00:13:55.874 "compare": false, 00:13:55.874 "compare_and_write": false, 00:13:55.874 "abort": true, 00:13:55.874 "seek_hole": false, 00:13:55.874 "seek_data": false, 00:13:55.874 "copy": true, 00:13:55.874 "nvme_iov_md": false 00:13:55.874 }, 00:13:55.874 "memory_domains": [ 00:13:55.874 { 00:13:55.874 "dma_device_id": "system", 00:13:55.874 "dma_device_type": 1 00:13:55.874 }, 00:13:55.874 { 00:13:55.874 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:55.874 "dma_device_type": 2 00:13:55.874 } 00:13:55.874 ], 00:13:55.874 "driver_specific": {} 00:13:55.874 } 00:13:55.874 ] 00:13:55.874 05:03:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.874 05:03:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:13:55.874 05:03:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:55.874 05:03:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:55.874 05:03:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:13:55.874 05:03:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:55.874 05:03:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # 
local expected_state=configuring 00:13:55.874 05:03:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:55.874 05:03:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:55.874 05:03:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:55.874 05:03:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:55.874 05:03:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:55.874 05:03:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:55.874 05:03:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:56.134 05:03:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:56.134 05:03:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:56.134 05:03:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.134 05:03:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.134 05:03:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.134 05:03:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:56.134 "name": "Existed_Raid", 00:13:56.134 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:56.134 "strip_size_kb": 64, 00:13:56.134 "state": "configuring", 00:13:56.134 "raid_level": "raid5f", 00:13:56.134 "superblock": false, 00:13:56.134 "num_base_bdevs": 4, 00:13:56.134 "num_base_bdevs_discovered": 2, 00:13:56.134 "num_base_bdevs_operational": 4, 00:13:56.134 "base_bdevs_list": [ 00:13:56.134 { 00:13:56.134 "name": "BaseBdev1", 00:13:56.134 "uuid": 
"5a87e417-d745-4825-bd1d-eb56024e8d34", 00:13:56.134 "is_configured": true, 00:13:56.134 "data_offset": 0, 00:13:56.134 "data_size": 65536 00:13:56.134 }, 00:13:56.134 { 00:13:56.134 "name": "BaseBdev2", 00:13:56.134 "uuid": "190e9cae-481a-4b2b-b42e-0d59c8906e25", 00:13:56.134 "is_configured": true, 00:13:56.134 "data_offset": 0, 00:13:56.134 "data_size": 65536 00:13:56.134 }, 00:13:56.134 { 00:13:56.134 "name": "BaseBdev3", 00:13:56.134 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:56.134 "is_configured": false, 00:13:56.134 "data_offset": 0, 00:13:56.134 "data_size": 0 00:13:56.134 }, 00:13:56.134 { 00:13:56.134 "name": "BaseBdev4", 00:13:56.134 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:56.134 "is_configured": false, 00:13:56.134 "data_offset": 0, 00:13:56.134 "data_size": 0 00:13:56.134 } 00:13:56.134 ] 00:13:56.134 }' 00:13:56.134 05:03:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:56.134 05:03:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.394 05:03:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:56.394 05:03:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.394 05:03:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.394 [2024-12-14 05:03:07.213966] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:56.394 BaseBdev3 00:13:56.394 05:03:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.394 05:03:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:13:56.394 05:03:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:13:56.394 05:03:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- 
# local bdev_timeout= 00:13:56.394 05:03:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:13:56.394 05:03:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:56.394 05:03:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:56.394 05:03:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:56.394 05:03:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.394 05:03:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.394 05:03:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.394 05:03:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:56.394 05:03:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.394 05:03:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.394 [ 00:13:56.394 { 00:13:56.394 "name": "BaseBdev3", 00:13:56.395 "aliases": [ 00:13:56.395 "fcfe747b-2fb3-4eb1-b936-b481f3150bbe" 00:13:56.395 ], 00:13:56.395 "product_name": "Malloc disk", 00:13:56.395 "block_size": 512, 00:13:56.395 "num_blocks": 65536, 00:13:56.395 "uuid": "fcfe747b-2fb3-4eb1-b936-b481f3150bbe", 00:13:56.395 "assigned_rate_limits": { 00:13:56.395 "rw_ios_per_sec": 0, 00:13:56.395 "rw_mbytes_per_sec": 0, 00:13:56.395 "r_mbytes_per_sec": 0, 00:13:56.395 "w_mbytes_per_sec": 0 00:13:56.395 }, 00:13:56.395 "claimed": true, 00:13:56.395 "claim_type": "exclusive_write", 00:13:56.395 "zoned": false, 00:13:56.395 "supported_io_types": { 00:13:56.395 "read": true, 00:13:56.395 "write": true, 00:13:56.395 "unmap": true, 00:13:56.395 "flush": true, 00:13:56.395 "reset": true, 00:13:56.395 "nvme_admin": false, 
00:13:56.395 "nvme_io": false, 00:13:56.395 "nvme_io_md": false, 00:13:56.395 "write_zeroes": true, 00:13:56.395 "zcopy": true, 00:13:56.395 "get_zone_info": false, 00:13:56.395 "zone_management": false, 00:13:56.395 "zone_append": false, 00:13:56.395 "compare": false, 00:13:56.395 "compare_and_write": false, 00:13:56.395 "abort": true, 00:13:56.395 "seek_hole": false, 00:13:56.395 "seek_data": false, 00:13:56.395 "copy": true, 00:13:56.395 "nvme_iov_md": false 00:13:56.395 }, 00:13:56.395 "memory_domains": [ 00:13:56.395 { 00:13:56.395 "dma_device_id": "system", 00:13:56.395 "dma_device_type": 1 00:13:56.395 }, 00:13:56.395 { 00:13:56.395 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:56.395 "dma_device_type": 2 00:13:56.395 } 00:13:56.395 ], 00:13:56.395 "driver_specific": {} 00:13:56.395 } 00:13:56.395 ] 00:13:56.395 05:03:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.395 05:03:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:13:56.395 05:03:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:56.395 05:03:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:56.395 05:03:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:13:56.395 05:03:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:56.395 05:03:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:56.395 05:03:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:56.395 05:03:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:56.395 05:03:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 
00:13:56.395 05:03:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:56.395 05:03:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:56.395 05:03:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:56.395 05:03:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:56.395 05:03:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:56.395 05:03:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.395 05:03:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.395 05:03:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:56.395 05:03:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.655 05:03:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:56.655 "name": "Existed_Raid", 00:13:56.655 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:56.655 "strip_size_kb": 64, 00:13:56.655 "state": "configuring", 00:13:56.655 "raid_level": "raid5f", 00:13:56.655 "superblock": false, 00:13:56.655 "num_base_bdevs": 4, 00:13:56.655 "num_base_bdevs_discovered": 3, 00:13:56.655 "num_base_bdevs_operational": 4, 00:13:56.655 "base_bdevs_list": [ 00:13:56.655 { 00:13:56.655 "name": "BaseBdev1", 00:13:56.655 "uuid": "5a87e417-d745-4825-bd1d-eb56024e8d34", 00:13:56.655 "is_configured": true, 00:13:56.655 "data_offset": 0, 00:13:56.655 "data_size": 65536 00:13:56.655 }, 00:13:56.655 { 00:13:56.655 "name": "BaseBdev2", 00:13:56.655 "uuid": "190e9cae-481a-4b2b-b42e-0d59c8906e25", 00:13:56.655 "is_configured": true, 00:13:56.655 "data_offset": 0, 00:13:56.655 "data_size": 65536 00:13:56.655 }, 00:13:56.655 { 
00:13:56.655 "name": "BaseBdev3", 00:13:56.655 "uuid": "fcfe747b-2fb3-4eb1-b936-b481f3150bbe", 00:13:56.655 "is_configured": true, 00:13:56.655 "data_offset": 0, 00:13:56.655 "data_size": 65536 00:13:56.655 }, 00:13:56.655 { 00:13:56.655 "name": "BaseBdev4", 00:13:56.655 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:56.655 "is_configured": false, 00:13:56.655 "data_offset": 0, 00:13:56.655 "data_size": 0 00:13:56.655 } 00:13:56.655 ] 00:13:56.655 }' 00:13:56.655 05:03:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:56.655 05:03:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.915 05:03:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:13:56.915 05:03:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.915 05:03:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.915 [2024-12-14 05:03:07.668269] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:56.915 [2024-12-14 05:03:07.668417] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:13:56.915 [2024-12-14 05:03:07.668444] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:13:56.915 [2024-12-14 05:03:07.668741] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:13:56.915 [2024-12-14 05:03:07.669284] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:13:56.915 [2024-12-14 05:03:07.669339] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:13:56.915 [2024-12-14 05:03:07.669620] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:56.915 BaseBdev4 00:13:56.915 05:03:07 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.915 05:03:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:13:56.915 05:03:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:13:56.915 05:03:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:56.915 05:03:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:13:56.915 05:03:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:56.915 05:03:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:56.915 05:03:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:56.915 05:03:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.915 05:03:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.915 05:03:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.916 05:03:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:13:56.916 05:03:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.916 05:03:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.916 [ 00:13:56.916 { 00:13:56.916 "name": "BaseBdev4", 00:13:56.916 "aliases": [ 00:13:56.916 "48d793c0-5ee9-49a9-89d9-487c3d59c4ea" 00:13:56.916 ], 00:13:56.916 "product_name": "Malloc disk", 00:13:56.916 "block_size": 512, 00:13:56.916 "num_blocks": 65536, 00:13:56.916 "uuid": "48d793c0-5ee9-49a9-89d9-487c3d59c4ea", 00:13:56.916 "assigned_rate_limits": { 00:13:56.916 "rw_ios_per_sec": 0, 00:13:56.916 
"rw_mbytes_per_sec": 0, 00:13:56.916 "r_mbytes_per_sec": 0, 00:13:56.916 "w_mbytes_per_sec": 0 00:13:56.916 }, 00:13:56.916 "claimed": true, 00:13:56.916 "claim_type": "exclusive_write", 00:13:56.916 "zoned": false, 00:13:56.916 "supported_io_types": { 00:13:56.916 "read": true, 00:13:56.916 "write": true, 00:13:56.916 "unmap": true, 00:13:56.916 "flush": true, 00:13:56.916 "reset": true, 00:13:56.916 "nvme_admin": false, 00:13:56.916 "nvme_io": false, 00:13:56.916 "nvme_io_md": false, 00:13:56.916 "write_zeroes": true, 00:13:56.916 "zcopy": true, 00:13:56.916 "get_zone_info": false, 00:13:56.916 "zone_management": false, 00:13:56.916 "zone_append": false, 00:13:56.916 "compare": false, 00:13:56.916 "compare_and_write": false, 00:13:56.916 "abort": true, 00:13:56.916 "seek_hole": false, 00:13:56.916 "seek_data": false, 00:13:56.916 "copy": true, 00:13:56.916 "nvme_iov_md": false 00:13:56.916 }, 00:13:56.916 "memory_domains": [ 00:13:56.916 { 00:13:56.916 "dma_device_id": "system", 00:13:56.916 "dma_device_type": 1 00:13:56.916 }, 00:13:56.916 { 00:13:56.916 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:56.916 "dma_device_type": 2 00:13:56.916 } 00:13:56.916 ], 00:13:56.916 "driver_specific": {} 00:13:56.916 } 00:13:56.916 ] 00:13:56.916 05:03:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.916 05:03:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:13:56.916 05:03:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:56.916 05:03:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:56.916 05:03:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:13:56.916 05:03:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:56.916 05:03:07 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:56.916 05:03:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:56.916 05:03:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:56.916 05:03:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:56.916 05:03:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:56.916 05:03:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:56.916 05:03:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:56.916 05:03:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:56.916 05:03:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:56.916 05:03:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:56.916 05:03:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.916 05:03:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.916 05:03:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.916 05:03:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:56.916 "name": "Existed_Raid", 00:13:56.916 "uuid": "771e1dd1-af89-401d-b8f4-ebaeb9636063", 00:13:56.916 "strip_size_kb": 64, 00:13:56.916 "state": "online", 00:13:56.916 "raid_level": "raid5f", 00:13:56.916 "superblock": false, 00:13:56.916 "num_base_bdevs": 4, 00:13:56.916 "num_base_bdevs_discovered": 4, 00:13:56.916 "num_base_bdevs_operational": 4, 00:13:56.916 "base_bdevs_list": [ 00:13:56.916 { 00:13:56.916 "name": 
"BaseBdev1", 00:13:56.916 "uuid": "5a87e417-d745-4825-bd1d-eb56024e8d34", 00:13:56.916 "is_configured": true, 00:13:56.916 "data_offset": 0, 00:13:56.916 "data_size": 65536 00:13:56.916 }, 00:13:56.916 { 00:13:56.916 "name": "BaseBdev2", 00:13:56.916 "uuid": "190e9cae-481a-4b2b-b42e-0d59c8906e25", 00:13:56.916 "is_configured": true, 00:13:56.916 "data_offset": 0, 00:13:56.916 "data_size": 65536 00:13:56.916 }, 00:13:56.916 { 00:13:56.916 "name": "BaseBdev3", 00:13:56.916 "uuid": "fcfe747b-2fb3-4eb1-b936-b481f3150bbe", 00:13:56.916 "is_configured": true, 00:13:56.916 "data_offset": 0, 00:13:56.916 "data_size": 65536 00:13:56.916 }, 00:13:56.916 { 00:13:56.916 "name": "BaseBdev4", 00:13:56.916 "uuid": "48d793c0-5ee9-49a9-89d9-487c3d59c4ea", 00:13:56.916 "is_configured": true, 00:13:56.916 "data_offset": 0, 00:13:56.916 "data_size": 65536 00:13:56.916 } 00:13:56.916 ] 00:13:56.916 }' 00:13:56.916 05:03:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:56.916 05:03:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.486 05:03:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:13:57.486 05:03:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:57.486 05:03:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:57.486 05:03:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:57.486 05:03:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:57.486 05:03:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:57.486 05:03:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:57.486 05:03:08 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.486 05:03:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:57.486 05:03:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.486 [2024-12-14 05:03:08.187631] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:57.486 05:03:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.486 05:03:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:57.486 "name": "Existed_Raid", 00:13:57.486 "aliases": [ 00:13:57.486 "771e1dd1-af89-401d-b8f4-ebaeb9636063" 00:13:57.486 ], 00:13:57.486 "product_name": "Raid Volume", 00:13:57.486 "block_size": 512, 00:13:57.486 "num_blocks": 196608, 00:13:57.486 "uuid": "771e1dd1-af89-401d-b8f4-ebaeb9636063", 00:13:57.486 "assigned_rate_limits": { 00:13:57.486 "rw_ios_per_sec": 0, 00:13:57.486 "rw_mbytes_per_sec": 0, 00:13:57.486 "r_mbytes_per_sec": 0, 00:13:57.486 "w_mbytes_per_sec": 0 00:13:57.486 }, 00:13:57.486 "claimed": false, 00:13:57.486 "zoned": false, 00:13:57.486 "supported_io_types": { 00:13:57.486 "read": true, 00:13:57.486 "write": true, 00:13:57.486 "unmap": false, 00:13:57.486 "flush": false, 00:13:57.486 "reset": true, 00:13:57.486 "nvme_admin": false, 00:13:57.486 "nvme_io": false, 00:13:57.486 "nvme_io_md": false, 00:13:57.486 "write_zeroes": true, 00:13:57.486 "zcopy": false, 00:13:57.486 "get_zone_info": false, 00:13:57.486 "zone_management": false, 00:13:57.486 "zone_append": false, 00:13:57.486 "compare": false, 00:13:57.486 "compare_and_write": false, 00:13:57.486 "abort": false, 00:13:57.486 "seek_hole": false, 00:13:57.486 "seek_data": false, 00:13:57.486 "copy": false, 00:13:57.486 "nvme_iov_md": false 00:13:57.486 }, 00:13:57.486 "driver_specific": { 00:13:57.486 "raid": { 00:13:57.486 "uuid": "771e1dd1-af89-401d-b8f4-ebaeb9636063", 00:13:57.486 "strip_size_kb": 64, 
00:13:57.486 "state": "online", 00:13:57.486 "raid_level": "raid5f", 00:13:57.486 "superblock": false, 00:13:57.486 "num_base_bdevs": 4, 00:13:57.486 "num_base_bdevs_discovered": 4, 00:13:57.486 "num_base_bdevs_operational": 4, 00:13:57.486 "base_bdevs_list": [ 00:13:57.486 { 00:13:57.486 "name": "BaseBdev1", 00:13:57.486 "uuid": "5a87e417-d745-4825-bd1d-eb56024e8d34", 00:13:57.486 "is_configured": true, 00:13:57.486 "data_offset": 0, 00:13:57.486 "data_size": 65536 00:13:57.486 }, 00:13:57.486 { 00:13:57.486 "name": "BaseBdev2", 00:13:57.486 "uuid": "190e9cae-481a-4b2b-b42e-0d59c8906e25", 00:13:57.486 "is_configured": true, 00:13:57.486 "data_offset": 0, 00:13:57.486 "data_size": 65536 00:13:57.486 }, 00:13:57.486 { 00:13:57.486 "name": "BaseBdev3", 00:13:57.486 "uuid": "fcfe747b-2fb3-4eb1-b936-b481f3150bbe", 00:13:57.486 "is_configured": true, 00:13:57.486 "data_offset": 0, 00:13:57.486 "data_size": 65536 00:13:57.486 }, 00:13:57.486 { 00:13:57.486 "name": "BaseBdev4", 00:13:57.486 "uuid": "48d793c0-5ee9-49a9-89d9-487c3d59c4ea", 00:13:57.486 "is_configured": true, 00:13:57.486 "data_offset": 0, 00:13:57.486 "data_size": 65536 00:13:57.486 } 00:13:57.486 ] 00:13:57.486 } 00:13:57.486 } 00:13:57.486 }' 00:13:57.486 05:03:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:57.486 05:03:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:13:57.486 BaseBdev2 00:13:57.486 BaseBdev3 00:13:57.486 BaseBdev4' 00:13:57.486 05:03:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:57.486 05:03:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:57.486 05:03:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:57.486 05:03:08 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:13:57.486 05:03:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.486 05:03:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.486 05:03:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:57.486 05:03:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.747 05:03:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:57.747 05:03:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:57.747 05:03:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:57.747 05:03:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:57.747 05:03:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:57.747 05:03:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.747 05:03:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.747 05:03:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.747 05:03:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:57.747 05:03:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:57.747 05:03:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:57.747 05:03:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev3 00:13:57.747 05:03:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:57.747 05:03:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.747 05:03:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.747 05:03:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.747 05:03:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:57.747 05:03:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:57.747 05:03:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:57.747 05:03:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:13:57.747 05:03:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.747 05:03:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.747 05:03:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:57.747 05:03:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.747 05:03:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:57.747 05:03:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:57.747 05:03:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:57.747 05:03:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.747 05:03:08 bdev_raid.raid5f_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:13:57.747 [2024-12-14 05:03:08.542917] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:57.747 05:03:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.747 05:03:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:13:57.747 05:03:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:13:57.747 05:03:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:57.747 05:03:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:13:57.747 05:03:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:13:57.747 05:03:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:13:57.747 05:03:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:57.747 05:03:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:57.747 05:03:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:57.747 05:03:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:57.747 05:03:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:57.747 05:03:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:57.747 05:03:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:57.747 05:03:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:57.747 05:03:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:57.747 05:03:08 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:57.747 05:03:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.747 05:03:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:57.747 05:03:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.747 05:03:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.747 05:03:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:57.747 "name": "Existed_Raid", 00:13:57.747 "uuid": "771e1dd1-af89-401d-b8f4-ebaeb9636063", 00:13:57.747 "strip_size_kb": 64, 00:13:57.747 "state": "online", 00:13:57.747 "raid_level": "raid5f", 00:13:57.747 "superblock": false, 00:13:57.747 "num_base_bdevs": 4, 00:13:57.747 "num_base_bdevs_discovered": 3, 00:13:57.747 "num_base_bdevs_operational": 3, 00:13:57.747 "base_bdevs_list": [ 00:13:57.747 { 00:13:57.747 "name": null, 00:13:57.747 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:57.747 "is_configured": false, 00:13:57.747 "data_offset": 0, 00:13:57.747 "data_size": 65536 00:13:57.747 }, 00:13:57.747 { 00:13:57.747 "name": "BaseBdev2", 00:13:57.747 "uuid": "190e9cae-481a-4b2b-b42e-0d59c8906e25", 00:13:57.747 "is_configured": true, 00:13:57.747 "data_offset": 0, 00:13:57.747 "data_size": 65536 00:13:57.747 }, 00:13:57.747 { 00:13:57.747 "name": "BaseBdev3", 00:13:57.747 "uuid": "fcfe747b-2fb3-4eb1-b936-b481f3150bbe", 00:13:57.747 "is_configured": true, 00:13:57.747 "data_offset": 0, 00:13:57.747 "data_size": 65536 00:13:57.747 }, 00:13:57.747 { 00:13:57.747 "name": "BaseBdev4", 00:13:57.747 "uuid": "48d793c0-5ee9-49a9-89d9-487c3d59c4ea", 00:13:57.747 "is_configured": true, 00:13:57.747 "data_offset": 0, 00:13:57.747 "data_size": 65536 00:13:57.747 } 00:13:57.747 ] 00:13:57.747 }' 00:13:57.747 
05:03:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:57.747 05:03:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.317 05:03:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:13:58.317 05:03:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:58.317 05:03:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:58.317 05:03:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.317 05:03:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:58.317 05:03:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.317 05:03:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.317 05:03:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:58.317 05:03:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:58.317 05:03:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:13:58.317 05:03:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.317 05:03:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.317 [2024-12-14 05:03:09.057499] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:58.317 [2024-12-14 05:03:09.057587] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:58.317 [2024-12-14 05:03:09.068695] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:58.317 05:03:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:13:58.317 05:03:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:58.317 05:03:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:58.317 05:03:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:58.317 05:03:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:58.317 05:03:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.317 05:03:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.317 05:03:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.317 05:03:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:58.317 05:03:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:58.317 05:03:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:13:58.317 05:03:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.317 05:03:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.317 [2024-12-14 05:03:09.128599] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:58.317 05:03:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.317 05:03:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:58.317 05:03:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:58.317 05:03:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:58.317 05:03:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # 
jq -r '.[0]["name"]' 00:13:58.317 05:03:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.317 05:03:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.317 05:03:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.317 05:03:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:58.317 05:03:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:58.317 05:03:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:13:58.317 05:03:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.317 05:03:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.317 [2024-12-14 05:03:09.195088] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:13:58.317 [2024-12-14 05:03:09.195135] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:13:58.578 05:03:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.578 05:03:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:58.578 05:03:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:58.578 05:03:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:58.578 05:03:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.578 05:03:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:13:58.578 05:03:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.578 05:03:09 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.578 05:03:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:13:58.578 05:03:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:13:58.578 05:03:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:13:58.578 05:03:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:13:58.578 05:03:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:58.578 05:03:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:58.578 05:03:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.578 05:03:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.578 BaseBdev2 00:13:58.578 05:03:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.578 05:03:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:13:58.578 05:03:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:13:58.578 05:03:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:58.578 05:03:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:13:58.578 05:03:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:58.578 05:03:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:58.578 05:03:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:58.578 05:03:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:13:58.578 05:03:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.578 05:03:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.578 05:03:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:58.578 05:03:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.578 05:03:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.578 [ 00:13:58.578 { 00:13:58.578 "name": "BaseBdev2", 00:13:58.578 "aliases": [ 00:13:58.578 "4bcf8d08-d2c2-4390-b7ee-82373c3ef60c" 00:13:58.578 ], 00:13:58.578 "product_name": "Malloc disk", 00:13:58.578 "block_size": 512, 00:13:58.578 "num_blocks": 65536, 00:13:58.578 "uuid": "4bcf8d08-d2c2-4390-b7ee-82373c3ef60c", 00:13:58.578 "assigned_rate_limits": { 00:13:58.578 "rw_ios_per_sec": 0, 00:13:58.578 "rw_mbytes_per_sec": 0, 00:13:58.578 "r_mbytes_per_sec": 0, 00:13:58.578 "w_mbytes_per_sec": 0 00:13:58.578 }, 00:13:58.578 "claimed": false, 00:13:58.578 "zoned": false, 00:13:58.578 "supported_io_types": { 00:13:58.578 "read": true, 00:13:58.578 "write": true, 00:13:58.578 "unmap": true, 00:13:58.578 "flush": true, 00:13:58.578 "reset": true, 00:13:58.578 "nvme_admin": false, 00:13:58.578 "nvme_io": false, 00:13:58.578 "nvme_io_md": false, 00:13:58.578 "write_zeroes": true, 00:13:58.578 "zcopy": true, 00:13:58.578 "get_zone_info": false, 00:13:58.578 "zone_management": false, 00:13:58.579 "zone_append": false, 00:13:58.579 "compare": false, 00:13:58.579 "compare_and_write": false, 00:13:58.579 "abort": true, 00:13:58.579 "seek_hole": false, 00:13:58.579 "seek_data": false, 00:13:58.579 "copy": true, 00:13:58.579 "nvme_iov_md": false 00:13:58.579 }, 00:13:58.579 "memory_domains": [ 00:13:58.579 { 00:13:58.579 "dma_device_id": "system", 00:13:58.579 "dma_device_type": 1 00:13:58.579 }, 
00:13:58.579 { 00:13:58.579 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:58.579 "dma_device_type": 2 00:13:58.579 } 00:13:58.579 ], 00:13:58.579 "driver_specific": {} 00:13:58.579 } 00:13:58.579 ] 00:13:58.579 05:03:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.579 05:03:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:13:58.579 05:03:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:58.579 05:03:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:58.579 05:03:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:58.579 05:03:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.579 05:03:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.579 BaseBdev3 00:13:58.579 05:03:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.579 05:03:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:13:58.579 05:03:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:13:58.579 05:03:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:58.579 05:03:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:13:58.579 05:03:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:58.579 05:03:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:58.579 05:03:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:58.579 05:03:09 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.579 05:03:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.579 05:03:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.579 05:03:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:58.579 05:03:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.579 05:03:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.579 [ 00:13:58.579 { 00:13:58.579 "name": "BaseBdev3", 00:13:58.579 "aliases": [ 00:13:58.579 "1d45acfb-4b00-4aad-82e2-e748163ba55b" 00:13:58.579 ], 00:13:58.579 "product_name": "Malloc disk", 00:13:58.579 "block_size": 512, 00:13:58.579 "num_blocks": 65536, 00:13:58.579 "uuid": "1d45acfb-4b00-4aad-82e2-e748163ba55b", 00:13:58.579 "assigned_rate_limits": { 00:13:58.579 "rw_ios_per_sec": 0, 00:13:58.579 "rw_mbytes_per_sec": 0, 00:13:58.579 "r_mbytes_per_sec": 0, 00:13:58.579 "w_mbytes_per_sec": 0 00:13:58.579 }, 00:13:58.579 "claimed": false, 00:13:58.579 "zoned": false, 00:13:58.579 "supported_io_types": { 00:13:58.579 "read": true, 00:13:58.579 "write": true, 00:13:58.579 "unmap": true, 00:13:58.579 "flush": true, 00:13:58.579 "reset": true, 00:13:58.579 "nvme_admin": false, 00:13:58.579 "nvme_io": false, 00:13:58.579 "nvme_io_md": false, 00:13:58.579 "write_zeroes": true, 00:13:58.579 "zcopy": true, 00:13:58.579 "get_zone_info": false, 00:13:58.579 "zone_management": false, 00:13:58.579 "zone_append": false, 00:13:58.579 "compare": false, 00:13:58.579 "compare_and_write": false, 00:13:58.579 "abort": true, 00:13:58.579 "seek_hole": false, 00:13:58.579 "seek_data": false, 00:13:58.579 "copy": true, 00:13:58.579 "nvme_iov_md": false 00:13:58.579 }, 00:13:58.579 "memory_domains": [ 00:13:58.579 { 00:13:58.579 "dma_device_id": "system", 00:13:58.579 
"dma_device_type": 1 00:13:58.579 }, 00:13:58.579 { 00:13:58.579 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:58.579 "dma_device_type": 2 00:13:58.579 } 00:13:58.579 ], 00:13:58.579 "driver_specific": {} 00:13:58.579 } 00:13:58.579 ] 00:13:58.579 05:03:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.579 05:03:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:13:58.579 05:03:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:58.579 05:03:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:58.579 05:03:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:13:58.579 05:03:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.579 05:03:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.579 BaseBdev4 00:13:58.579 05:03:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.579 05:03:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:13:58.579 05:03:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:13:58.579 05:03:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:58.579 05:03:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:13:58.579 05:03:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:58.579 05:03:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:58.579 05:03:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:58.579 05:03:09 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.579 05:03:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.579 05:03:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.579 05:03:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:13:58.579 05:03:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.579 05:03:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.579 [ 00:13:58.579 { 00:13:58.579 "name": "BaseBdev4", 00:13:58.579 "aliases": [ 00:13:58.579 "d42cf0f1-e53b-4dca-91fb-d325c240c667" 00:13:58.579 ], 00:13:58.579 "product_name": "Malloc disk", 00:13:58.579 "block_size": 512, 00:13:58.579 "num_blocks": 65536, 00:13:58.579 "uuid": "d42cf0f1-e53b-4dca-91fb-d325c240c667", 00:13:58.579 "assigned_rate_limits": { 00:13:58.579 "rw_ios_per_sec": 0, 00:13:58.579 "rw_mbytes_per_sec": 0, 00:13:58.579 "r_mbytes_per_sec": 0, 00:13:58.579 "w_mbytes_per_sec": 0 00:13:58.579 }, 00:13:58.579 "claimed": false, 00:13:58.579 "zoned": false, 00:13:58.579 "supported_io_types": { 00:13:58.579 "read": true, 00:13:58.579 "write": true, 00:13:58.579 "unmap": true, 00:13:58.579 "flush": true, 00:13:58.579 "reset": true, 00:13:58.579 "nvme_admin": false, 00:13:58.579 "nvme_io": false, 00:13:58.579 "nvme_io_md": false, 00:13:58.579 "write_zeroes": true, 00:13:58.579 "zcopy": true, 00:13:58.579 "get_zone_info": false, 00:13:58.579 "zone_management": false, 00:13:58.579 "zone_append": false, 00:13:58.579 "compare": false, 00:13:58.579 "compare_and_write": false, 00:13:58.579 "abort": true, 00:13:58.579 "seek_hole": false, 00:13:58.579 "seek_data": false, 00:13:58.579 "copy": true, 00:13:58.579 "nvme_iov_md": false 00:13:58.579 }, 00:13:58.579 "memory_domains": [ 00:13:58.579 { 00:13:58.579 
"dma_device_id": "system", 00:13:58.579 "dma_device_type": 1 00:13:58.579 }, 00:13:58.579 { 00:13:58.579 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:58.579 "dma_device_type": 2 00:13:58.579 } 00:13:58.579 ], 00:13:58.579 "driver_specific": {} 00:13:58.579 } 00:13:58.579 ] 00:13:58.579 05:03:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.579 05:03:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:13:58.579 05:03:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:58.579 05:03:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:58.579 05:03:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:58.579 05:03:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.579 05:03:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.579 [2024-12-14 05:03:09.427663] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:58.579 [2024-12-14 05:03:09.427803] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:58.579 [2024-12-14 05:03:09.427845] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:58.579 [2024-12-14 05:03:09.429645] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:58.579 [2024-12-14 05:03:09.429736] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:58.579 05:03:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.579 05:03:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid 
configuring raid5f 64 4 00:13:58.579 05:03:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:58.579 05:03:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:58.579 05:03:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:58.579 05:03:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:58.579 05:03:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:58.579 05:03:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:58.579 05:03:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:58.579 05:03:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:58.579 05:03:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:58.580 05:03:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:58.580 05:03:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.580 05:03:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:58.580 05:03:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.839 05:03:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.839 05:03:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:58.839 "name": "Existed_Raid", 00:13:58.839 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:58.839 "strip_size_kb": 64, 00:13:58.839 "state": "configuring", 00:13:58.839 "raid_level": "raid5f", 00:13:58.839 "superblock": false, 00:13:58.839 
"num_base_bdevs": 4, 00:13:58.839 "num_base_bdevs_discovered": 3, 00:13:58.839 "num_base_bdevs_operational": 4, 00:13:58.839 "base_bdevs_list": [ 00:13:58.839 { 00:13:58.839 "name": "BaseBdev1", 00:13:58.839 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:58.839 "is_configured": false, 00:13:58.839 "data_offset": 0, 00:13:58.839 "data_size": 0 00:13:58.839 }, 00:13:58.839 { 00:13:58.839 "name": "BaseBdev2", 00:13:58.839 "uuid": "4bcf8d08-d2c2-4390-b7ee-82373c3ef60c", 00:13:58.839 "is_configured": true, 00:13:58.839 "data_offset": 0, 00:13:58.839 "data_size": 65536 00:13:58.839 }, 00:13:58.839 { 00:13:58.839 "name": "BaseBdev3", 00:13:58.839 "uuid": "1d45acfb-4b00-4aad-82e2-e748163ba55b", 00:13:58.839 "is_configured": true, 00:13:58.839 "data_offset": 0, 00:13:58.839 "data_size": 65536 00:13:58.839 }, 00:13:58.839 { 00:13:58.839 "name": "BaseBdev4", 00:13:58.839 "uuid": "d42cf0f1-e53b-4dca-91fb-d325c240c667", 00:13:58.839 "is_configured": true, 00:13:58.839 "data_offset": 0, 00:13:58.839 "data_size": 65536 00:13:58.839 } 00:13:58.839 ] 00:13:58.839 }' 00:13:58.839 05:03:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:58.839 05:03:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.099 05:03:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:59.099 05:03:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.099 05:03:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.099 [2024-12-14 05:03:09.847075] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:59.099 05:03:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.099 05:03:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 
00:13:59.099 05:03:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:59.099 05:03:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:59.099 05:03:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:59.099 05:03:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:59.099 05:03:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:59.099 05:03:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:59.099 05:03:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:59.099 05:03:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:59.099 05:03:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:59.099 05:03:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:59.099 05:03:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:59.099 05:03:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.099 05:03:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.099 05:03:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.099 05:03:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:59.099 "name": "Existed_Raid", 00:13:59.099 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:59.099 "strip_size_kb": 64, 00:13:59.099 "state": "configuring", 00:13:59.099 "raid_level": "raid5f", 00:13:59.099 "superblock": false, 00:13:59.099 "num_base_bdevs": 4, 
00:13:59.099 "num_base_bdevs_discovered": 2, 00:13:59.099 "num_base_bdevs_operational": 4, 00:13:59.099 "base_bdevs_list": [ 00:13:59.099 { 00:13:59.099 "name": "BaseBdev1", 00:13:59.099 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:59.099 "is_configured": false, 00:13:59.099 "data_offset": 0, 00:13:59.099 "data_size": 0 00:13:59.099 }, 00:13:59.099 { 00:13:59.099 "name": null, 00:13:59.099 "uuid": "4bcf8d08-d2c2-4390-b7ee-82373c3ef60c", 00:13:59.100 "is_configured": false, 00:13:59.100 "data_offset": 0, 00:13:59.100 "data_size": 65536 00:13:59.100 }, 00:13:59.100 { 00:13:59.100 "name": "BaseBdev3", 00:13:59.100 "uuid": "1d45acfb-4b00-4aad-82e2-e748163ba55b", 00:13:59.100 "is_configured": true, 00:13:59.100 "data_offset": 0, 00:13:59.100 "data_size": 65536 00:13:59.100 }, 00:13:59.100 { 00:13:59.100 "name": "BaseBdev4", 00:13:59.100 "uuid": "d42cf0f1-e53b-4dca-91fb-d325c240c667", 00:13:59.100 "is_configured": true, 00:13:59.100 "data_offset": 0, 00:13:59.100 "data_size": 65536 00:13:59.100 } 00:13:59.100 ] 00:13:59.100 }' 00:13:59.100 05:03:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:59.100 05:03:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.670 05:03:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:59.670 05:03:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:59.670 05:03:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.670 05:03:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.670 05:03:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.670 05:03:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:13:59.670 05:03:10 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:59.670 05:03:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.670 05:03:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.670 [2024-12-14 05:03:10.317156] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:59.670 BaseBdev1 00:13:59.670 05:03:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.670 05:03:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:13:59.670 05:03:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:13:59.670 05:03:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:59.670 05:03:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:13:59.670 05:03:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:59.670 05:03:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:59.670 05:03:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:59.670 05:03:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.670 05:03:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.670 05:03:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.670 05:03:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:59.670 05:03:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.670 05:03:10 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.670 [ 00:13:59.670 { 00:13:59.670 "name": "BaseBdev1", 00:13:59.670 "aliases": [ 00:13:59.670 "e7beeb91-70c3-4dc3-840e-92ef366b0ee3" 00:13:59.670 ], 00:13:59.670 "product_name": "Malloc disk", 00:13:59.670 "block_size": 512, 00:13:59.670 "num_blocks": 65536, 00:13:59.670 "uuid": "e7beeb91-70c3-4dc3-840e-92ef366b0ee3", 00:13:59.670 "assigned_rate_limits": { 00:13:59.670 "rw_ios_per_sec": 0, 00:13:59.670 "rw_mbytes_per_sec": 0, 00:13:59.670 "r_mbytes_per_sec": 0, 00:13:59.670 "w_mbytes_per_sec": 0 00:13:59.670 }, 00:13:59.670 "claimed": true, 00:13:59.670 "claim_type": "exclusive_write", 00:13:59.670 "zoned": false, 00:13:59.670 "supported_io_types": { 00:13:59.670 "read": true, 00:13:59.670 "write": true, 00:13:59.670 "unmap": true, 00:13:59.670 "flush": true, 00:13:59.670 "reset": true, 00:13:59.670 "nvme_admin": false, 00:13:59.670 "nvme_io": false, 00:13:59.670 "nvme_io_md": false, 00:13:59.670 "write_zeroes": true, 00:13:59.670 "zcopy": true, 00:13:59.670 "get_zone_info": false, 00:13:59.670 "zone_management": false, 00:13:59.670 "zone_append": false, 00:13:59.670 "compare": false, 00:13:59.670 "compare_and_write": false, 00:13:59.670 "abort": true, 00:13:59.670 "seek_hole": false, 00:13:59.670 "seek_data": false, 00:13:59.670 "copy": true, 00:13:59.670 "nvme_iov_md": false 00:13:59.670 }, 00:13:59.670 "memory_domains": [ 00:13:59.670 { 00:13:59.670 "dma_device_id": "system", 00:13:59.670 "dma_device_type": 1 00:13:59.670 }, 00:13:59.670 { 00:13:59.670 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:59.670 "dma_device_type": 2 00:13:59.670 } 00:13:59.670 ], 00:13:59.670 "driver_specific": {} 00:13:59.670 } 00:13:59.670 ] 00:13:59.670 05:03:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.670 05:03:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:13:59.670 05:03:10 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:13:59.670 05:03:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:59.670 05:03:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:59.670 05:03:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:59.670 05:03:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:59.670 05:03:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:59.670 05:03:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:59.670 05:03:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:59.670 05:03:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:59.670 05:03:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:59.670 05:03:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:59.670 05:03:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.670 05:03:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.670 05:03:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:59.670 05:03:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.670 05:03:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:59.670 "name": "Existed_Raid", 00:13:59.670 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:59.670 "strip_size_kb": 64, 00:13:59.670 "state": 
"configuring", 00:13:59.670 "raid_level": "raid5f", 00:13:59.670 "superblock": false, 00:13:59.670 "num_base_bdevs": 4, 00:13:59.670 "num_base_bdevs_discovered": 3, 00:13:59.670 "num_base_bdevs_operational": 4, 00:13:59.670 "base_bdevs_list": [ 00:13:59.670 { 00:13:59.670 "name": "BaseBdev1", 00:13:59.670 "uuid": "e7beeb91-70c3-4dc3-840e-92ef366b0ee3", 00:13:59.670 "is_configured": true, 00:13:59.670 "data_offset": 0, 00:13:59.670 "data_size": 65536 00:13:59.670 }, 00:13:59.670 { 00:13:59.670 "name": null, 00:13:59.670 "uuid": "4bcf8d08-d2c2-4390-b7ee-82373c3ef60c", 00:13:59.670 "is_configured": false, 00:13:59.670 "data_offset": 0, 00:13:59.670 "data_size": 65536 00:13:59.670 }, 00:13:59.670 { 00:13:59.670 "name": "BaseBdev3", 00:13:59.670 "uuid": "1d45acfb-4b00-4aad-82e2-e748163ba55b", 00:13:59.670 "is_configured": true, 00:13:59.670 "data_offset": 0, 00:13:59.670 "data_size": 65536 00:13:59.670 }, 00:13:59.670 { 00:13:59.670 "name": "BaseBdev4", 00:13:59.670 "uuid": "d42cf0f1-e53b-4dca-91fb-d325c240c667", 00:13:59.670 "is_configured": true, 00:13:59.670 "data_offset": 0, 00:13:59.670 "data_size": 65536 00:13:59.670 } 00:13:59.670 ] 00:13:59.670 }' 00:13:59.670 05:03:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:59.670 05:03:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.930 05:03:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:59.930 05:03:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.930 05:03:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.930 05:03:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:59.930 05:03:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.190 05:03:10 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:14:00.190 05:03:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:14:00.190 05:03:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.190 05:03:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.190 [2024-12-14 05:03:10.844288] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:00.190 05:03:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.190 05:03:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:00.190 05:03:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:00.190 05:03:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:00.190 05:03:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:00.190 05:03:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:00.190 05:03:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:00.190 05:03:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:00.190 05:03:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:00.190 05:03:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:00.190 05:03:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:00.190 05:03:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:00.190 05:03:10 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.190 05:03:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:00.190 05:03:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.190 05:03:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.190 05:03:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:00.190 "name": "Existed_Raid", 00:14:00.190 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:00.190 "strip_size_kb": 64, 00:14:00.190 "state": "configuring", 00:14:00.190 "raid_level": "raid5f", 00:14:00.190 "superblock": false, 00:14:00.190 "num_base_bdevs": 4, 00:14:00.190 "num_base_bdevs_discovered": 2, 00:14:00.190 "num_base_bdevs_operational": 4, 00:14:00.190 "base_bdevs_list": [ 00:14:00.190 { 00:14:00.190 "name": "BaseBdev1", 00:14:00.190 "uuid": "e7beeb91-70c3-4dc3-840e-92ef366b0ee3", 00:14:00.190 "is_configured": true, 00:14:00.190 "data_offset": 0, 00:14:00.190 "data_size": 65536 00:14:00.190 }, 00:14:00.190 { 00:14:00.190 "name": null, 00:14:00.190 "uuid": "4bcf8d08-d2c2-4390-b7ee-82373c3ef60c", 00:14:00.190 "is_configured": false, 00:14:00.190 "data_offset": 0, 00:14:00.190 "data_size": 65536 00:14:00.190 }, 00:14:00.190 { 00:14:00.190 "name": null, 00:14:00.190 "uuid": "1d45acfb-4b00-4aad-82e2-e748163ba55b", 00:14:00.190 "is_configured": false, 00:14:00.190 "data_offset": 0, 00:14:00.190 "data_size": 65536 00:14:00.190 }, 00:14:00.190 { 00:14:00.190 "name": "BaseBdev4", 00:14:00.190 "uuid": "d42cf0f1-e53b-4dca-91fb-d325c240c667", 00:14:00.190 "is_configured": true, 00:14:00.190 "data_offset": 0, 00:14:00.190 "data_size": 65536 00:14:00.190 } 00:14:00.190 ] 00:14:00.190 }' 00:14:00.190 05:03:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:00.190 05:03:10 
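The `verify_raid_bdev_state` calls above fetch the raid bdev's JSON via `bdev_raid_get_bdevs all | jq` and compare fields like `state` and `raid_level` against expected values. A minimal sketch of that field check follows, using a canned JSON blob taken from the log output and a sed-based extractor in place of jq, so it runs standalone; the real helper in `bdev_raid.sh` is more thorough (it also counts discovered/operational base bdevs).

```shell
#!/usr/bin/env bash
# Canned from the bdev_raid_get_bdevs output above (flattened).
raid_bdev_info='{"name": "Existed_Raid", "state": "configuring", "raid_level": "raid5f", "strip_size_kb": 64, "num_base_bdevs": 4}'

# Poor man's jq: pull the value of "key" out of the flat JSON blob
# and compare it against the expected value.
check_field() {
    local key=$1 expected=$2 actual
    actual=$(sed -n "s/.*\"$key\": *\"\{0,1\}\([^\",}]*\)\"\{0,1\}.*/\1/p" <<< "$raid_bdev_info")
    [[ $actual == "$expected" ]]
}

check_field state configuring && check_field raid_level raid5f && echo "state OK"
```

After removing a base bdev the raid stays in `configuring` (not `online`), which is exactly what the repeated `verify_raid_bdev_state Existed_Raid configuring raid5f 64 4` assertions above are confirming.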
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.450 05:03:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:00.450 05:03:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:00.450 05:03:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.450 05:03:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.450 05:03:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.450 05:03:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:14:00.450 05:03:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:14:00.450 05:03:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.450 05:03:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.450 [2024-12-14 05:03:11.283601] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:00.450 05:03:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.450 05:03:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:00.450 05:03:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:00.450 05:03:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:00.450 05:03:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:00.450 05:03:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:00.450 
05:03:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:00.450 05:03:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:00.450 05:03:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:00.450 05:03:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:00.450 05:03:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:00.450 05:03:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:00.450 05:03:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:00.450 05:03:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.450 05:03:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.450 05:03:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.450 05:03:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:00.450 "name": "Existed_Raid", 00:14:00.450 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:00.450 "strip_size_kb": 64, 00:14:00.450 "state": "configuring", 00:14:00.450 "raid_level": "raid5f", 00:14:00.450 "superblock": false, 00:14:00.450 "num_base_bdevs": 4, 00:14:00.450 "num_base_bdevs_discovered": 3, 00:14:00.450 "num_base_bdevs_operational": 4, 00:14:00.450 "base_bdevs_list": [ 00:14:00.450 { 00:14:00.450 "name": "BaseBdev1", 00:14:00.450 "uuid": "e7beeb91-70c3-4dc3-840e-92ef366b0ee3", 00:14:00.450 "is_configured": true, 00:14:00.450 "data_offset": 0, 00:14:00.450 "data_size": 65536 00:14:00.450 }, 00:14:00.450 { 00:14:00.450 "name": null, 00:14:00.450 "uuid": "4bcf8d08-d2c2-4390-b7ee-82373c3ef60c", 00:14:00.450 "is_configured": 
false, 00:14:00.450 "data_offset": 0, 00:14:00.450 "data_size": 65536 00:14:00.450 }, 00:14:00.450 { 00:14:00.450 "name": "BaseBdev3", 00:14:00.450 "uuid": "1d45acfb-4b00-4aad-82e2-e748163ba55b", 00:14:00.450 "is_configured": true, 00:14:00.450 "data_offset": 0, 00:14:00.450 "data_size": 65536 00:14:00.450 }, 00:14:00.450 { 00:14:00.450 "name": "BaseBdev4", 00:14:00.450 "uuid": "d42cf0f1-e53b-4dca-91fb-d325c240c667", 00:14:00.450 "is_configured": true, 00:14:00.450 "data_offset": 0, 00:14:00.450 "data_size": 65536 00:14:00.450 } 00:14:00.450 ] 00:14:00.450 }' 00:14:00.450 05:03:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:00.450 05:03:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.021 05:03:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:01.021 05:03:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:01.021 05:03:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.021 05:03:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.021 05:03:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.021 05:03:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:14:01.021 05:03:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:01.021 05:03:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.021 05:03:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.021 [2024-12-14 05:03:11.778779] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:01.021 05:03:11 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.021 05:03:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:01.021 05:03:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:01.021 05:03:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:01.021 05:03:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:01.021 05:03:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:01.021 05:03:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:01.021 05:03:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:01.021 05:03:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:01.021 05:03:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:01.021 05:03:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:01.021 05:03:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:01.021 05:03:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:01.021 05:03:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.021 05:03:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.021 05:03:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.021 05:03:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:01.021 "name": "Existed_Raid", 00:14:01.021 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:14:01.021 "strip_size_kb": 64, 00:14:01.021 "state": "configuring", 00:14:01.021 "raid_level": "raid5f", 00:14:01.021 "superblock": false, 00:14:01.021 "num_base_bdevs": 4, 00:14:01.021 "num_base_bdevs_discovered": 2, 00:14:01.021 "num_base_bdevs_operational": 4, 00:14:01.021 "base_bdevs_list": [ 00:14:01.021 { 00:14:01.021 "name": null, 00:14:01.021 "uuid": "e7beeb91-70c3-4dc3-840e-92ef366b0ee3", 00:14:01.021 "is_configured": false, 00:14:01.021 "data_offset": 0, 00:14:01.021 "data_size": 65536 00:14:01.021 }, 00:14:01.021 { 00:14:01.021 "name": null, 00:14:01.021 "uuid": "4bcf8d08-d2c2-4390-b7ee-82373c3ef60c", 00:14:01.021 "is_configured": false, 00:14:01.021 "data_offset": 0, 00:14:01.021 "data_size": 65536 00:14:01.021 }, 00:14:01.021 { 00:14:01.021 "name": "BaseBdev3", 00:14:01.021 "uuid": "1d45acfb-4b00-4aad-82e2-e748163ba55b", 00:14:01.021 "is_configured": true, 00:14:01.021 "data_offset": 0, 00:14:01.021 "data_size": 65536 00:14:01.021 }, 00:14:01.021 { 00:14:01.021 "name": "BaseBdev4", 00:14:01.021 "uuid": "d42cf0f1-e53b-4dca-91fb-d325c240c667", 00:14:01.021 "is_configured": true, 00:14:01.021 "data_offset": 0, 00:14:01.021 "data_size": 65536 00:14:01.021 } 00:14:01.021 ] 00:14:01.021 }' 00:14:01.021 05:03:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:01.021 05:03:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.591 05:03:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:01.591 05:03:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:01.591 05:03:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.591 05:03:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.591 05:03:12 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.591 05:03:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:14:01.591 05:03:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:14:01.591 05:03:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.591 05:03:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.591 [2024-12-14 05:03:12.280576] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:01.591 05:03:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.591 05:03:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:01.591 05:03:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:01.591 05:03:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:01.591 05:03:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:01.591 05:03:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:01.591 05:03:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:01.591 05:03:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:01.591 05:03:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:01.591 05:03:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:01.591 05:03:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:01.591 05:03:12 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:01.591 05:03:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:01.591 05:03:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.591 05:03:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.591 05:03:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.591 05:03:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:01.591 "name": "Existed_Raid", 00:14:01.591 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:01.591 "strip_size_kb": 64, 00:14:01.591 "state": "configuring", 00:14:01.591 "raid_level": "raid5f", 00:14:01.591 "superblock": false, 00:14:01.591 "num_base_bdevs": 4, 00:14:01.591 "num_base_bdevs_discovered": 3, 00:14:01.591 "num_base_bdevs_operational": 4, 00:14:01.591 "base_bdevs_list": [ 00:14:01.591 { 00:14:01.591 "name": null, 00:14:01.591 "uuid": "e7beeb91-70c3-4dc3-840e-92ef366b0ee3", 00:14:01.591 "is_configured": false, 00:14:01.591 "data_offset": 0, 00:14:01.591 "data_size": 65536 00:14:01.591 }, 00:14:01.591 { 00:14:01.591 "name": "BaseBdev2", 00:14:01.591 "uuid": "4bcf8d08-d2c2-4390-b7ee-82373c3ef60c", 00:14:01.591 "is_configured": true, 00:14:01.591 "data_offset": 0, 00:14:01.591 "data_size": 65536 00:14:01.591 }, 00:14:01.591 { 00:14:01.591 "name": "BaseBdev3", 00:14:01.591 "uuid": "1d45acfb-4b00-4aad-82e2-e748163ba55b", 00:14:01.591 "is_configured": true, 00:14:01.591 "data_offset": 0, 00:14:01.591 "data_size": 65536 00:14:01.591 }, 00:14:01.591 { 00:14:01.591 "name": "BaseBdev4", 00:14:01.591 "uuid": "d42cf0f1-e53b-4dca-91fb-d325c240c667", 00:14:01.591 "is_configured": true, 00:14:01.591 "data_offset": 0, 00:14:01.591 "data_size": 65536 00:14:01.592 } 00:14:01.592 ] 00:14:01.592 }' 00:14:01.592 05:03:12 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:01.592 05:03:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.851 05:03:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:01.852 05:03:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.852 05:03:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.852 05:03:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:02.112 05:03:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.112 05:03:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:14:02.112 05:03:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:02.112 05:03:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.112 05:03:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.112 05:03:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:14:02.112 05:03:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.112 05:03:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u e7beeb91-70c3-4dc3-840e-92ef366b0ee3 00:14:02.112 05:03:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.112 05:03:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.112 [2024-12-14 05:03:12.834437] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:14:02.112 [2024-12-14 
05:03:12.834486] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:14:02.112 [2024-12-14 05:03:12.834494] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:14:02.112 [2024-12-14 05:03:12.834723] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:14:02.112 [2024-12-14 05:03:12.835116] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:14:02.112 [2024-12-14 05:03:12.835129] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:14:02.112 [2024-12-14 05:03:12.835338] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:02.112 NewBaseBdev 00:14:02.112 05:03:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.112 05:03:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:14:02.112 05:03:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:14:02.112 05:03:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:02.112 05:03:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:14:02.112 05:03:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:02.112 05:03:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:02.112 05:03:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:02.112 05:03:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.112 05:03:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.112 05:03:12 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.112 05:03:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:14:02.112 05:03:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.112 05:03:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.112 [ 00:14:02.112 { 00:14:02.112 "name": "NewBaseBdev", 00:14:02.112 "aliases": [ 00:14:02.112 "e7beeb91-70c3-4dc3-840e-92ef366b0ee3" 00:14:02.112 ], 00:14:02.112 "product_name": "Malloc disk", 00:14:02.112 "block_size": 512, 00:14:02.112 "num_blocks": 65536, 00:14:02.112 "uuid": "e7beeb91-70c3-4dc3-840e-92ef366b0ee3", 00:14:02.112 "assigned_rate_limits": { 00:14:02.112 "rw_ios_per_sec": 0, 00:14:02.112 "rw_mbytes_per_sec": 0, 00:14:02.112 "r_mbytes_per_sec": 0, 00:14:02.112 "w_mbytes_per_sec": 0 00:14:02.112 }, 00:14:02.112 "claimed": true, 00:14:02.112 "claim_type": "exclusive_write", 00:14:02.112 "zoned": false, 00:14:02.112 "supported_io_types": { 00:14:02.112 "read": true, 00:14:02.112 "write": true, 00:14:02.112 "unmap": true, 00:14:02.112 "flush": true, 00:14:02.112 "reset": true, 00:14:02.112 "nvme_admin": false, 00:14:02.112 "nvme_io": false, 00:14:02.112 "nvme_io_md": false, 00:14:02.112 "write_zeroes": true, 00:14:02.112 "zcopy": true, 00:14:02.112 "get_zone_info": false, 00:14:02.112 "zone_management": false, 00:14:02.112 "zone_append": false, 00:14:02.112 "compare": false, 00:14:02.112 "compare_and_write": false, 00:14:02.112 "abort": true, 00:14:02.112 "seek_hole": false, 00:14:02.112 "seek_data": false, 00:14:02.112 "copy": true, 00:14:02.112 "nvme_iov_md": false 00:14:02.112 }, 00:14:02.112 "memory_domains": [ 00:14:02.112 { 00:14:02.112 "dma_device_id": "system", 00:14:02.112 "dma_device_type": 1 00:14:02.112 }, 00:14:02.112 { 00:14:02.112 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:02.112 "dma_device_type": 2 00:14:02.112 } 
00:14:02.112 ], 00:14:02.112 "driver_specific": {} 00:14:02.112 } 00:14:02.112 ] 00:14:02.112 05:03:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.112 05:03:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:14:02.112 05:03:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:14:02.112 05:03:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:02.112 05:03:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:02.112 05:03:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:02.112 05:03:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:02.112 05:03:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:02.112 05:03:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:02.112 05:03:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:02.112 05:03:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:02.112 05:03:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:02.112 05:03:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:02.112 05:03:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.113 05:03:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:02.113 05:03:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.113 05:03:12 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.113 05:03:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:02.113 "name": "Existed_Raid", 00:14:02.113 "uuid": "609eaea5-f4fa-440d-9f69-fd9399c591c7", 00:14:02.113 "strip_size_kb": 64, 00:14:02.113 "state": "online", 00:14:02.113 "raid_level": "raid5f", 00:14:02.113 "superblock": false, 00:14:02.113 "num_base_bdevs": 4, 00:14:02.113 "num_base_bdevs_discovered": 4, 00:14:02.113 "num_base_bdevs_operational": 4, 00:14:02.113 "base_bdevs_list": [ 00:14:02.113 { 00:14:02.113 "name": "NewBaseBdev", 00:14:02.113 "uuid": "e7beeb91-70c3-4dc3-840e-92ef366b0ee3", 00:14:02.113 "is_configured": true, 00:14:02.113 "data_offset": 0, 00:14:02.113 "data_size": 65536 00:14:02.113 }, 00:14:02.113 { 00:14:02.113 "name": "BaseBdev2", 00:14:02.113 "uuid": "4bcf8d08-d2c2-4390-b7ee-82373c3ef60c", 00:14:02.113 "is_configured": true, 00:14:02.113 "data_offset": 0, 00:14:02.113 "data_size": 65536 00:14:02.113 }, 00:14:02.113 { 00:14:02.113 "name": "BaseBdev3", 00:14:02.113 "uuid": "1d45acfb-4b00-4aad-82e2-e748163ba55b", 00:14:02.113 "is_configured": true, 00:14:02.113 "data_offset": 0, 00:14:02.113 "data_size": 65536 00:14:02.113 }, 00:14:02.113 { 00:14:02.113 "name": "BaseBdev4", 00:14:02.113 "uuid": "d42cf0f1-e53b-4dca-91fb-d325c240c667", 00:14:02.113 "is_configured": true, 00:14:02.113 "data_offset": 0, 00:14:02.113 "data_size": 65536 00:14:02.113 } 00:14:02.113 ] 00:14:02.113 }' 00:14:02.113 05:03:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:02.113 05:03:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.683 05:03:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:14:02.683 05:03:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:02.683 05:03:13 bdev_raid.raid5f_state_function_test 
-- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:02.683 05:03:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:02.683 05:03:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:02.683 05:03:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:02.683 05:03:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:02.683 05:03:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:02.683 05:03:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.683 05:03:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.683 [2024-12-14 05:03:13.285847] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:02.683 05:03:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.683 05:03:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:02.683 "name": "Existed_Raid", 00:14:02.683 "aliases": [ 00:14:02.683 "609eaea5-f4fa-440d-9f69-fd9399c591c7" 00:14:02.683 ], 00:14:02.683 "product_name": "Raid Volume", 00:14:02.683 "block_size": 512, 00:14:02.683 "num_blocks": 196608, 00:14:02.683 "uuid": "609eaea5-f4fa-440d-9f69-fd9399c591c7", 00:14:02.683 "assigned_rate_limits": { 00:14:02.683 "rw_ios_per_sec": 0, 00:14:02.683 "rw_mbytes_per_sec": 0, 00:14:02.683 "r_mbytes_per_sec": 0, 00:14:02.683 "w_mbytes_per_sec": 0 00:14:02.683 }, 00:14:02.683 "claimed": false, 00:14:02.683 "zoned": false, 00:14:02.683 "supported_io_types": { 00:14:02.683 "read": true, 00:14:02.683 "write": true, 00:14:02.683 "unmap": false, 00:14:02.683 "flush": false, 00:14:02.683 "reset": true, 00:14:02.683 "nvme_admin": false, 00:14:02.683 "nvme_io": false, 00:14:02.683 "nvme_io_md": 
false, 00:14:02.683 "write_zeroes": true, 00:14:02.683 "zcopy": false, 00:14:02.683 "get_zone_info": false, 00:14:02.683 "zone_management": false, 00:14:02.683 "zone_append": false, 00:14:02.683 "compare": false, 00:14:02.683 "compare_and_write": false, 00:14:02.683 "abort": false, 00:14:02.683 "seek_hole": false, 00:14:02.683 "seek_data": false, 00:14:02.683 "copy": false, 00:14:02.683 "nvme_iov_md": false 00:14:02.683 }, 00:14:02.683 "driver_specific": { 00:14:02.683 "raid": { 00:14:02.683 "uuid": "609eaea5-f4fa-440d-9f69-fd9399c591c7", 00:14:02.683 "strip_size_kb": 64, 00:14:02.683 "state": "online", 00:14:02.683 "raid_level": "raid5f", 00:14:02.683 "superblock": false, 00:14:02.683 "num_base_bdevs": 4, 00:14:02.683 "num_base_bdevs_discovered": 4, 00:14:02.683 "num_base_bdevs_operational": 4, 00:14:02.683 "base_bdevs_list": [ 00:14:02.683 { 00:14:02.683 "name": "NewBaseBdev", 00:14:02.683 "uuid": "e7beeb91-70c3-4dc3-840e-92ef366b0ee3", 00:14:02.683 "is_configured": true, 00:14:02.683 "data_offset": 0, 00:14:02.683 "data_size": 65536 00:14:02.683 }, 00:14:02.683 { 00:14:02.683 "name": "BaseBdev2", 00:14:02.683 "uuid": "4bcf8d08-d2c2-4390-b7ee-82373c3ef60c", 00:14:02.683 "is_configured": true, 00:14:02.683 "data_offset": 0, 00:14:02.683 "data_size": 65536 00:14:02.683 }, 00:14:02.683 { 00:14:02.683 "name": "BaseBdev3", 00:14:02.683 "uuid": "1d45acfb-4b00-4aad-82e2-e748163ba55b", 00:14:02.683 "is_configured": true, 00:14:02.683 "data_offset": 0, 00:14:02.683 "data_size": 65536 00:14:02.683 }, 00:14:02.683 { 00:14:02.683 "name": "BaseBdev4", 00:14:02.683 "uuid": "d42cf0f1-e53b-4dca-91fb-d325c240c667", 00:14:02.683 "is_configured": true, 00:14:02.683 "data_offset": 0, 00:14:02.683 "data_size": 65536 00:14:02.683 } 00:14:02.683 ] 00:14:02.683 } 00:14:02.683 } 00:14:02.683 }' 00:14:02.683 05:03:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:02.683 05:03:13 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:14:02.683 BaseBdev2 00:14:02.683 BaseBdev3 00:14:02.683 BaseBdev4' 00:14:02.683 05:03:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:02.684 05:03:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:02.684 05:03:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:02.684 05:03:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:14:02.684 05:03:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.684 05:03:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.684 05:03:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:02.684 05:03:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.684 05:03:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:02.684 05:03:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:02.684 05:03:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:02.684 05:03:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:02.684 05:03:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:02.684 05:03:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.684 05:03:13 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:02.684 05:03:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.684 05:03:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:02.684 05:03:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:02.684 05:03:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:02.684 05:03:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:02.684 05:03:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:02.684 05:03:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.684 05:03:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.684 05:03:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.944 05:03:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:02.944 05:03:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:02.944 05:03:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:02.944 05:03:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:14:02.944 05:03:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:02.944 05:03:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.944 05:03:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.944 05:03:13 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.944 05:03:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:02.944 05:03:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:02.944 05:03:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:02.944 05:03:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.944 05:03:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.944 [2024-12-14 05:03:13.629098] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:02.944 [2024-12-14 05:03:13.629189] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:02.944 [2024-12-14 05:03:13.629269] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:02.944 [2024-12-14 05:03:13.629559] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:02.944 [2024-12-14 05:03:13.629611] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline 00:14:02.944 05:03:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.944 05:03:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 93258 00:14:02.944 05:03:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 93258 ']' 00:14:02.944 05:03:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # kill -0 93258 00:14:02.944 05:03:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@955 -- # uname 00:14:02.944 05:03:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 
00:14:02.944 05:03:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 93258 00:14:02.944 05:03:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:02.944 05:03:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:02.944 killing process with pid 93258 00:14:02.944 05:03:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 93258' 00:14:02.944 05:03:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@969 -- # kill 93258 00:14:02.944 [2024-12-14 05:03:13.678442] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:02.944 05:03:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@974 -- # wait 93258 00:14:02.944 [2024-12-14 05:03:13.719808] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:03.205 05:03:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:14:03.205 ************************************ 00:14:03.205 END TEST raid5f_state_function_test 00:14:03.205 ************************************ 00:14:03.205 00:14:03.205 real 0m9.692s 00:14:03.205 user 0m16.493s 00:14:03.205 sys 0m2.149s 00:14:03.205 05:03:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:03.205 05:03:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.205 05:03:14 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 4 true 00:14:03.205 05:03:14 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:14:03.205 05:03:14 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:03.205 05:03:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:03.205 ************************************ 00:14:03.205 START TEST 
raid5f_state_function_test_sb 00:14:03.205 ************************************ 00:14:03.205 05:03:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid5f 4 true 00:14:03.205 05:03:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:14:03.205 05:03:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:14:03.205 05:03:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:14:03.205 05:03:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:14:03.205 05:03:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:14:03.205 05:03:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:03.205 05:03:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:14:03.205 05:03:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:03.205 05:03:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:03.205 05:03:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:14:03.205 05:03:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:03.205 05:03:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:03.205 05:03:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:14:03.205 05:03:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:03.205 05:03:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:03.205 05:03:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:14:03.205 
05:03:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:03.205 05:03:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:03.205 05:03:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:03.205 05:03:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:14:03.205 05:03:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:14:03.205 05:03:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:14:03.205 05:03:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:14:03.205 05:03:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:14:03.205 05:03:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:14:03.205 05:03:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:14:03.205 05:03:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:14:03.205 05:03:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:14:03.205 05:03:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:14:03.205 05:03:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=93907 00:14:03.205 05:03:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:14:03.205 05:03:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 93907' 00:14:03.205 Process raid pid: 93907 00:14:03.205 05:03:14 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 93907 00:14:03.205 05:03:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 93907 ']' 00:14:03.205 05:03:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:03.205 05:03:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:03.205 05:03:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:03.205 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:03.205 05:03:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:03.205 05:03:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:03.466 [2024-12-14 05:03:14.156050] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:14:03.466 [2024-12-14 05:03:14.156258] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:03.466 [2024-12-14 05:03:14.316398] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:03.726 [2024-12-14 05:03:14.364870] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:03.726 [2024-12-14 05:03:14.407646] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:03.726 [2024-12-14 05:03:14.407765] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:04.295 05:03:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:04.295 05:03:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:14:04.295 05:03:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:04.295 05:03:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.295 05:03:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:04.295 [2024-12-14 05:03:14.989436] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:04.295 [2024-12-14 05:03:14.989555] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:04.295 [2024-12-14 05:03:14.989586] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:04.295 [2024-12-14 05:03:14.989608] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:04.295 [2024-12-14 05:03:14.989625] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev3 00:14:04.295 [2024-12-14 05:03:14.989649] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:04.295 [2024-12-14 05:03:14.989666] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:04.295 [2024-12-14 05:03:14.989686] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:04.295 05:03:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.295 05:03:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:04.295 05:03:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:04.295 05:03:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:04.295 05:03:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:04.295 05:03:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:04.295 05:03:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:04.295 05:03:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:04.295 05:03:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:04.295 05:03:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:04.296 05:03:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:04.296 05:03:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:04.296 05:03:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name 
== "Existed_Raid")' 00:14:04.296 05:03:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.296 05:03:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:04.296 05:03:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.296 05:03:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:04.296 "name": "Existed_Raid", 00:14:04.296 "uuid": "1dfc4932-9a64-48fe-8dd1-094029461259", 00:14:04.296 "strip_size_kb": 64, 00:14:04.296 "state": "configuring", 00:14:04.296 "raid_level": "raid5f", 00:14:04.296 "superblock": true, 00:14:04.296 "num_base_bdevs": 4, 00:14:04.296 "num_base_bdevs_discovered": 0, 00:14:04.296 "num_base_bdevs_operational": 4, 00:14:04.296 "base_bdevs_list": [ 00:14:04.296 { 00:14:04.296 "name": "BaseBdev1", 00:14:04.296 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:04.296 "is_configured": false, 00:14:04.296 "data_offset": 0, 00:14:04.296 "data_size": 0 00:14:04.296 }, 00:14:04.296 { 00:14:04.296 "name": "BaseBdev2", 00:14:04.296 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:04.296 "is_configured": false, 00:14:04.296 "data_offset": 0, 00:14:04.296 "data_size": 0 00:14:04.296 }, 00:14:04.296 { 00:14:04.296 "name": "BaseBdev3", 00:14:04.296 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:04.296 "is_configured": false, 00:14:04.296 "data_offset": 0, 00:14:04.296 "data_size": 0 00:14:04.296 }, 00:14:04.296 { 00:14:04.296 "name": "BaseBdev4", 00:14:04.296 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:04.296 "is_configured": false, 00:14:04.296 "data_offset": 0, 00:14:04.296 "data_size": 0 00:14:04.296 } 00:14:04.296 ] 00:14:04.296 }' 00:14:04.296 05:03:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:04.296 05:03:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:14:04.865 05:03:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:04.865 05:03:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.865 05:03:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:04.865 [2024-12-14 05:03:15.440546] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:04.865 [2024-12-14 05:03:15.440585] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:14:04.865 05:03:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.865 05:03:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:04.865 05:03:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.865 05:03:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:04.865 [2024-12-14 05:03:15.452571] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:04.865 [2024-12-14 05:03:15.452650] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:04.865 [2024-12-14 05:03:15.452692] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:04.865 [2024-12-14 05:03:15.452714] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:04.865 [2024-12-14 05:03:15.452732] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:04.865 [2024-12-14 05:03:15.452752] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:04.865 [2024-12-14 05:03:15.452770] 
bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:04.865 [2024-12-14 05:03:15.452789] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:04.865 05:03:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.865 05:03:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:04.865 05:03:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.865 05:03:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:04.865 [2024-12-14 05:03:15.473375] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:04.865 BaseBdev1 00:14:04.865 05:03:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.865 05:03:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:14:04.865 05:03:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:14:04.865 05:03:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:04.865 05:03:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:14:04.865 05:03:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:04.865 05:03:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:04.865 05:03:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:04.865 05:03:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.865 05:03:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:14:04.866 05:03:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.866 05:03:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:04.866 05:03:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.866 05:03:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:04.866 [ 00:14:04.866 { 00:14:04.866 "name": "BaseBdev1", 00:14:04.866 "aliases": [ 00:14:04.866 "1ac10483-04b6-4251-98d0-4e4b3917fb2d" 00:14:04.866 ], 00:14:04.866 "product_name": "Malloc disk", 00:14:04.866 "block_size": 512, 00:14:04.866 "num_blocks": 65536, 00:14:04.866 "uuid": "1ac10483-04b6-4251-98d0-4e4b3917fb2d", 00:14:04.866 "assigned_rate_limits": { 00:14:04.866 "rw_ios_per_sec": 0, 00:14:04.866 "rw_mbytes_per_sec": 0, 00:14:04.866 "r_mbytes_per_sec": 0, 00:14:04.866 "w_mbytes_per_sec": 0 00:14:04.866 }, 00:14:04.866 "claimed": true, 00:14:04.866 "claim_type": "exclusive_write", 00:14:04.866 "zoned": false, 00:14:04.866 "supported_io_types": { 00:14:04.866 "read": true, 00:14:04.866 "write": true, 00:14:04.866 "unmap": true, 00:14:04.866 "flush": true, 00:14:04.866 "reset": true, 00:14:04.866 "nvme_admin": false, 00:14:04.866 "nvme_io": false, 00:14:04.866 "nvme_io_md": false, 00:14:04.866 "write_zeroes": true, 00:14:04.866 "zcopy": true, 00:14:04.866 "get_zone_info": false, 00:14:04.866 "zone_management": false, 00:14:04.866 "zone_append": false, 00:14:04.866 "compare": false, 00:14:04.866 "compare_and_write": false, 00:14:04.866 "abort": true, 00:14:04.866 "seek_hole": false, 00:14:04.866 "seek_data": false, 00:14:04.866 "copy": true, 00:14:04.866 "nvme_iov_md": false 00:14:04.866 }, 00:14:04.866 "memory_domains": [ 00:14:04.866 { 00:14:04.866 "dma_device_id": "system", 00:14:04.866 "dma_device_type": 1 00:14:04.866 }, 00:14:04.866 { 00:14:04.866 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:14:04.866 "dma_device_type": 2 00:14:04.866 } 00:14:04.866 ], 00:14:04.866 "driver_specific": {} 00:14:04.866 } 00:14:04.866 ] 00:14:04.866 05:03:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.866 05:03:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:14:04.866 05:03:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:04.866 05:03:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:04.866 05:03:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:04.866 05:03:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:04.866 05:03:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:04.866 05:03:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:04.866 05:03:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:04.866 05:03:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:04.866 05:03:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:04.866 05:03:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:04.866 05:03:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:04.866 05:03:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:04.866 05:03:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.866 05:03:15 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:04.866 05:03:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.866 05:03:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:04.866 "name": "Existed_Raid", 00:14:04.866 "uuid": "614651f7-73fd-45bc-906e-88168612a05d", 00:14:04.866 "strip_size_kb": 64, 00:14:04.866 "state": "configuring", 00:14:04.866 "raid_level": "raid5f", 00:14:04.866 "superblock": true, 00:14:04.866 "num_base_bdevs": 4, 00:14:04.866 "num_base_bdevs_discovered": 1, 00:14:04.866 "num_base_bdevs_operational": 4, 00:14:04.866 "base_bdevs_list": [ 00:14:04.866 { 00:14:04.866 "name": "BaseBdev1", 00:14:04.866 "uuid": "1ac10483-04b6-4251-98d0-4e4b3917fb2d", 00:14:04.866 "is_configured": true, 00:14:04.866 "data_offset": 2048, 00:14:04.866 "data_size": 63488 00:14:04.866 }, 00:14:04.866 { 00:14:04.866 "name": "BaseBdev2", 00:14:04.866 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:04.866 "is_configured": false, 00:14:04.866 "data_offset": 0, 00:14:04.866 "data_size": 0 00:14:04.866 }, 00:14:04.866 { 00:14:04.866 "name": "BaseBdev3", 00:14:04.866 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:04.866 "is_configured": false, 00:14:04.866 "data_offset": 0, 00:14:04.866 "data_size": 0 00:14:04.866 }, 00:14:04.866 { 00:14:04.866 "name": "BaseBdev4", 00:14:04.866 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:04.866 "is_configured": false, 00:14:04.866 "data_offset": 0, 00:14:04.866 "data_size": 0 00:14:04.866 } 00:14:04.866 ] 00:14:04.866 }' 00:14:04.866 05:03:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:04.866 05:03:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:05.126 05:03:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:05.126 05:03:15 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.126 05:03:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:05.126 [2024-12-14 05:03:15.980547] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:05.126 [2024-12-14 05:03:15.980593] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:14:05.126 05:03:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.126 05:03:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:05.126 05:03:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.126 05:03:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:05.126 [2024-12-14 05:03:15.992565] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:05.126 [2024-12-14 05:03:15.994313] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:05.126 [2024-12-14 05:03:15.994354] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:05.126 [2024-12-14 05:03:15.994363] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:05.126 [2024-12-14 05:03:15.994372] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:05.126 [2024-12-14 05:03:15.994378] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:05.126 [2024-12-14 05:03:15.994386] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:05.126 05:03:15 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.126 05:03:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:14:05.126 05:03:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:05.126 05:03:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:05.126 05:03:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:05.126 05:03:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:05.126 05:03:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:05.126 05:03:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:05.126 05:03:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:05.126 05:03:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:05.126 05:03:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:05.126 05:03:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:05.126 05:03:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:05.126 05:03:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:05.126 05:03:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.126 05:03:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:05.386 05:03:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:05.386 05:03:16 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.386 05:03:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:05.386 "name": "Existed_Raid", 00:14:05.386 "uuid": "181e7c45-d659-4a1e-b853-77efb81cbcc2", 00:14:05.386 "strip_size_kb": 64, 00:14:05.386 "state": "configuring", 00:14:05.386 "raid_level": "raid5f", 00:14:05.386 "superblock": true, 00:14:05.386 "num_base_bdevs": 4, 00:14:05.386 "num_base_bdevs_discovered": 1, 00:14:05.386 "num_base_bdevs_operational": 4, 00:14:05.386 "base_bdevs_list": [ 00:14:05.386 { 00:14:05.386 "name": "BaseBdev1", 00:14:05.386 "uuid": "1ac10483-04b6-4251-98d0-4e4b3917fb2d", 00:14:05.386 "is_configured": true, 00:14:05.386 "data_offset": 2048, 00:14:05.386 "data_size": 63488 00:14:05.386 }, 00:14:05.386 { 00:14:05.386 "name": "BaseBdev2", 00:14:05.386 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:05.386 "is_configured": false, 00:14:05.386 "data_offset": 0, 00:14:05.386 "data_size": 0 00:14:05.386 }, 00:14:05.386 { 00:14:05.386 "name": "BaseBdev3", 00:14:05.386 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:05.386 "is_configured": false, 00:14:05.386 "data_offset": 0, 00:14:05.386 "data_size": 0 00:14:05.386 }, 00:14:05.386 { 00:14:05.386 "name": "BaseBdev4", 00:14:05.386 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:05.386 "is_configured": false, 00:14:05.386 "data_offset": 0, 00:14:05.386 "data_size": 0 00:14:05.386 } 00:14:05.386 ] 00:14:05.386 }' 00:14:05.386 05:03:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:05.386 05:03:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:05.647 05:03:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:05.647 05:03:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:14:05.647 05:03:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:05.647 [2024-12-14 05:03:16.459923] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:05.647 BaseBdev2 00:14:05.647 05:03:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.647 05:03:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:14:05.647 05:03:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:14:05.647 05:03:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:05.647 05:03:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:14:05.647 05:03:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:05.647 05:03:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:05.647 05:03:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:05.647 05:03:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.647 05:03:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:05.647 05:03:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.647 05:03:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:05.647 05:03:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.647 05:03:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:05.647 [ 00:14:05.647 { 00:14:05.647 "name": "BaseBdev2", 00:14:05.647 "aliases": [ 00:14:05.647 
"3b9c510a-5b2f-45cd-9a22-518c3c4a0a80" 00:14:05.647 ], 00:14:05.647 "product_name": "Malloc disk", 00:14:05.647 "block_size": 512, 00:14:05.647 "num_blocks": 65536, 00:14:05.647 "uuid": "3b9c510a-5b2f-45cd-9a22-518c3c4a0a80", 00:14:05.647 "assigned_rate_limits": { 00:14:05.647 "rw_ios_per_sec": 0, 00:14:05.647 "rw_mbytes_per_sec": 0, 00:14:05.647 "r_mbytes_per_sec": 0, 00:14:05.647 "w_mbytes_per_sec": 0 00:14:05.647 }, 00:14:05.647 "claimed": true, 00:14:05.647 "claim_type": "exclusive_write", 00:14:05.647 "zoned": false, 00:14:05.647 "supported_io_types": { 00:14:05.647 "read": true, 00:14:05.647 "write": true, 00:14:05.647 "unmap": true, 00:14:05.647 "flush": true, 00:14:05.647 "reset": true, 00:14:05.647 "nvme_admin": false, 00:14:05.647 "nvme_io": false, 00:14:05.647 "nvme_io_md": false, 00:14:05.647 "write_zeroes": true, 00:14:05.647 "zcopy": true, 00:14:05.647 "get_zone_info": false, 00:14:05.647 "zone_management": false, 00:14:05.647 "zone_append": false, 00:14:05.647 "compare": false, 00:14:05.647 "compare_and_write": false, 00:14:05.647 "abort": true, 00:14:05.647 "seek_hole": false, 00:14:05.647 "seek_data": false, 00:14:05.647 "copy": true, 00:14:05.647 "nvme_iov_md": false 00:14:05.647 }, 00:14:05.647 "memory_domains": [ 00:14:05.647 { 00:14:05.647 "dma_device_id": "system", 00:14:05.647 "dma_device_type": 1 00:14:05.647 }, 00:14:05.647 { 00:14:05.647 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:05.647 "dma_device_type": 2 00:14:05.647 } 00:14:05.647 ], 00:14:05.647 "driver_specific": {} 00:14:05.647 } 00:14:05.647 ] 00:14:05.647 05:03:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.647 05:03:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:14:05.647 05:03:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:05.647 05:03:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 
00:14:05.647 05:03:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:05.647 05:03:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:05.647 05:03:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:05.647 05:03:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:05.647 05:03:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:05.647 05:03:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:05.647 05:03:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:05.647 05:03:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:05.647 05:03:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:05.647 05:03:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:05.647 05:03:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:05.647 05:03:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.647 05:03:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:05.647 05:03:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:05.907 05:03:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.907 05:03:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:05.907 "name": "Existed_Raid", 00:14:05.907 "uuid": 
"181e7c45-d659-4a1e-b853-77efb81cbcc2", 00:14:05.907 "strip_size_kb": 64, 00:14:05.907 "state": "configuring", 00:14:05.907 "raid_level": "raid5f", 00:14:05.907 "superblock": true, 00:14:05.907 "num_base_bdevs": 4, 00:14:05.907 "num_base_bdevs_discovered": 2, 00:14:05.907 "num_base_bdevs_operational": 4, 00:14:05.907 "base_bdevs_list": [ 00:14:05.907 { 00:14:05.907 "name": "BaseBdev1", 00:14:05.907 "uuid": "1ac10483-04b6-4251-98d0-4e4b3917fb2d", 00:14:05.907 "is_configured": true, 00:14:05.907 "data_offset": 2048, 00:14:05.907 "data_size": 63488 00:14:05.907 }, 00:14:05.907 { 00:14:05.907 "name": "BaseBdev2", 00:14:05.907 "uuid": "3b9c510a-5b2f-45cd-9a22-518c3c4a0a80", 00:14:05.907 "is_configured": true, 00:14:05.907 "data_offset": 2048, 00:14:05.907 "data_size": 63488 00:14:05.907 }, 00:14:05.907 { 00:14:05.907 "name": "BaseBdev3", 00:14:05.907 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:05.907 "is_configured": false, 00:14:05.907 "data_offset": 0, 00:14:05.907 "data_size": 0 00:14:05.907 }, 00:14:05.907 { 00:14:05.907 "name": "BaseBdev4", 00:14:05.907 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:05.907 "is_configured": false, 00:14:05.907 "data_offset": 0, 00:14:05.907 "data_size": 0 00:14:05.907 } 00:14:05.907 ] 00:14:05.907 }' 00:14:05.907 05:03:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:05.907 05:03:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:06.167 05:03:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:06.167 05:03:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.167 05:03:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:06.167 BaseBdev3 00:14:06.167 [2024-12-14 05:03:16.878285] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 
00:14:06.167 05:03:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.167 05:03:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:14:06.167 05:03:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:14:06.167 05:03:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:06.167 05:03:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:14:06.167 05:03:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:06.167 05:03:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:06.167 05:03:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:06.167 05:03:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.167 05:03:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:06.167 05:03:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.167 05:03:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:06.167 05:03:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.167 05:03:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:06.167 [ 00:14:06.167 { 00:14:06.167 "name": "BaseBdev3", 00:14:06.167 "aliases": [ 00:14:06.167 "74a43844-6a72-4c00-b108-1de344fc1ccf" 00:14:06.167 ], 00:14:06.167 "product_name": "Malloc disk", 00:14:06.167 "block_size": 512, 00:14:06.167 "num_blocks": 65536, 00:14:06.167 "uuid": "74a43844-6a72-4c00-b108-1de344fc1ccf", 00:14:06.167 
"assigned_rate_limits": { 00:14:06.167 "rw_ios_per_sec": 0, 00:14:06.167 "rw_mbytes_per_sec": 0, 00:14:06.167 "r_mbytes_per_sec": 0, 00:14:06.167 "w_mbytes_per_sec": 0 00:14:06.167 }, 00:14:06.167 "claimed": true, 00:14:06.167 "claim_type": "exclusive_write", 00:14:06.167 "zoned": false, 00:14:06.168 "supported_io_types": { 00:14:06.168 "read": true, 00:14:06.168 "write": true, 00:14:06.168 "unmap": true, 00:14:06.168 "flush": true, 00:14:06.168 "reset": true, 00:14:06.168 "nvme_admin": false, 00:14:06.168 "nvme_io": false, 00:14:06.168 "nvme_io_md": false, 00:14:06.168 "write_zeroes": true, 00:14:06.168 "zcopy": true, 00:14:06.168 "get_zone_info": false, 00:14:06.168 "zone_management": false, 00:14:06.168 "zone_append": false, 00:14:06.168 "compare": false, 00:14:06.168 "compare_and_write": false, 00:14:06.168 "abort": true, 00:14:06.168 "seek_hole": false, 00:14:06.168 "seek_data": false, 00:14:06.168 "copy": true, 00:14:06.168 "nvme_iov_md": false 00:14:06.168 }, 00:14:06.168 "memory_domains": [ 00:14:06.168 { 00:14:06.168 "dma_device_id": "system", 00:14:06.168 "dma_device_type": 1 00:14:06.168 }, 00:14:06.168 { 00:14:06.168 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:06.168 "dma_device_type": 2 00:14:06.168 } 00:14:06.168 ], 00:14:06.168 "driver_specific": {} 00:14:06.168 } 00:14:06.168 ] 00:14:06.168 05:03:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.168 05:03:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:14:06.168 05:03:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:06.168 05:03:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:06.168 05:03:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:06.168 05:03:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:14:06.168 05:03:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:06.168 05:03:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:06.168 05:03:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:06.168 05:03:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:06.168 05:03:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:06.168 05:03:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:06.168 05:03:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:06.168 05:03:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:06.168 05:03:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:06.168 05:03:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:06.168 05:03:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.168 05:03:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:06.168 05:03:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.168 05:03:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:06.168 "name": "Existed_Raid", 00:14:06.168 "uuid": "181e7c45-d659-4a1e-b853-77efb81cbcc2", 00:14:06.168 "strip_size_kb": 64, 00:14:06.168 "state": "configuring", 00:14:06.168 "raid_level": "raid5f", 00:14:06.168 "superblock": true, 00:14:06.168 "num_base_bdevs": 4, 00:14:06.168 "num_base_bdevs_discovered": 3, 
00:14:06.168 "num_base_bdevs_operational": 4, 00:14:06.168 "base_bdevs_list": [ 00:14:06.168 { 00:14:06.168 "name": "BaseBdev1", 00:14:06.168 "uuid": "1ac10483-04b6-4251-98d0-4e4b3917fb2d", 00:14:06.168 "is_configured": true, 00:14:06.168 "data_offset": 2048, 00:14:06.168 "data_size": 63488 00:14:06.168 }, 00:14:06.168 { 00:14:06.168 "name": "BaseBdev2", 00:14:06.168 "uuid": "3b9c510a-5b2f-45cd-9a22-518c3c4a0a80", 00:14:06.168 "is_configured": true, 00:14:06.168 "data_offset": 2048, 00:14:06.168 "data_size": 63488 00:14:06.168 }, 00:14:06.168 { 00:14:06.168 "name": "BaseBdev3", 00:14:06.168 "uuid": "74a43844-6a72-4c00-b108-1de344fc1ccf", 00:14:06.168 "is_configured": true, 00:14:06.168 "data_offset": 2048, 00:14:06.168 "data_size": 63488 00:14:06.168 }, 00:14:06.168 { 00:14:06.168 "name": "BaseBdev4", 00:14:06.168 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:06.168 "is_configured": false, 00:14:06.168 "data_offset": 0, 00:14:06.168 "data_size": 0 00:14:06.168 } 00:14:06.168 ] 00:14:06.168 }' 00:14:06.168 05:03:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:06.168 05:03:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:06.738 05:03:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:14:06.738 05:03:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.738 05:03:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:06.738 [2024-12-14 05:03:17.328653] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:06.738 [2024-12-14 05:03:17.328899] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:14:06.738 [2024-12-14 05:03:17.328920] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:14:06.738 [2024-12-14 
05:03:17.329171] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:14:06.738 BaseBdev4 00:14:06.738 [2024-12-14 05:03:17.329692] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:14:06.738 [2024-12-14 05:03:17.329708] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:14:06.738 [2024-12-14 05:03:17.329821] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:06.738 05:03:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.738 05:03:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:14:06.738 05:03:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:14:06.738 05:03:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:06.738 05:03:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:14:06.738 05:03:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:06.738 05:03:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:06.738 05:03:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:06.738 05:03:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.738 05:03:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:06.738 05:03:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.738 05:03:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:14:06.738 05:03:17 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.738 05:03:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:06.738 [ 00:14:06.738 { 00:14:06.738 "name": "BaseBdev4", 00:14:06.738 "aliases": [ 00:14:06.738 "98170824-9a7d-4fb2-b4e0-f52441ad61bf" 00:14:06.738 ], 00:14:06.738 "product_name": "Malloc disk", 00:14:06.738 "block_size": 512, 00:14:06.738 "num_blocks": 65536, 00:14:06.738 "uuid": "98170824-9a7d-4fb2-b4e0-f52441ad61bf", 00:14:06.738 "assigned_rate_limits": { 00:14:06.738 "rw_ios_per_sec": 0, 00:14:06.738 "rw_mbytes_per_sec": 0, 00:14:06.738 "r_mbytes_per_sec": 0, 00:14:06.738 "w_mbytes_per_sec": 0 00:14:06.738 }, 00:14:06.738 "claimed": true, 00:14:06.738 "claim_type": "exclusive_write", 00:14:06.738 "zoned": false, 00:14:06.738 "supported_io_types": { 00:14:06.738 "read": true, 00:14:06.738 "write": true, 00:14:06.738 "unmap": true, 00:14:06.738 "flush": true, 00:14:06.738 "reset": true, 00:14:06.738 "nvme_admin": false, 00:14:06.738 "nvme_io": false, 00:14:06.738 "nvme_io_md": false, 00:14:06.738 "write_zeroes": true, 00:14:06.738 "zcopy": true, 00:14:06.738 "get_zone_info": false, 00:14:06.738 "zone_management": false, 00:14:06.738 "zone_append": false, 00:14:06.738 "compare": false, 00:14:06.738 "compare_and_write": false, 00:14:06.738 "abort": true, 00:14:06.738 "seek_hole": false, 00:14:06.738 "seek_data": false, 00:14:06.738 "copy": true, 00:14:06.738 "nvme_iov_md": false 00:14:06.738 }, 00:14:06.738 "memory_domains": [ 00:14:06.738 { 00:14:06.738 "dma_device_id": "system", 00:14:06.738 "dma_device_type": 1 00:14:06.738 }, 00:14:06.738 { 00:14:06.738 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:06.738 "dma_device_type": 2 00:14:06.738 } 00:14:06.738 ], 00:14:06.738 "driver_specific": {} 00:14:06.738 } 00:14:06.738 ] 00:14:06.738 05:03:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.738 05:03:17 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:14:06.738 05:03:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:06.738 05:03:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:06.738 05:03:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:14:06.738 05:03:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:06.738 05:03:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:06.738 05:03:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:06.738 05:03:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:06.738 05:03:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:06.738 05:03:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:06.738 05:03:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:06.738 05:03:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:06.738 05:03:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:06.738 05:03:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:06.738 05:03:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:06.738 05:03:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.738 05:03:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:14:06.738 05:03:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.738 05:03:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:06.738 "name": "Existed_Raid", 00:14:06.738 "uuid": "181e7c45-d659-4a1e-b853-77efb81cbcc2", 00:14:06.738 "strip_size_kb": 64, 00:14:06.738 "state": "online", 00:14:06.738 "raid_level": "raid5f", 00:14:06.738 "superblock": true, 00:14:06.738 "num_base_bdevs": 4, 00:14:06.738 "num_base_bdevs_discovered": 4, 00:14:06.738 "num_base_bdevs_operational": 4, 00:14:06.738 "base_bdevs_list": [ 00:14:06.738 { 00:14:06.738 "name": "BaseBdev1", 00:14:06.738 "uuid": "1ac10483-04b6-4251-98d0-4e4b3917fb2d", 00:14:06.738 "is_configured": true, 00:14:06.738 "data_offset": 2048, 00:14:06.738 "data_size": 63488 00:14:06.738 }, 00:14:06.738 { 00:14:06.738 "name": "BaseBdev2", 00:14:06.738 "uuid": "3b9c510a-5b2f-45cd-9a22-518c3c4a0a80", 00:14:06.738 "is_configured": true, 00:14:06.738 "data_offset": 2048, 00:14:06.738 "data_size": 63488 00:14:06.738 }, 00:14:06.738 { 00:14:06.738 "name": "BaseBdev3", 00:14:06.738 "uuid": "74a43844-6a72-4c00-b108-1de344fc1ccf", 00:14:06.738 "is_configured": true, 00:14:06.738 "data_offset": 2048, 00:14:06.738 "data_size": 63488 00:14:06.738 }, 00:14:06.738 { 00:14:06.738 "name": "BaseBdev4", 00:14:06.738 "uuid": "98170824-9a7d-4fb2-b4e0-f52441ad61bf", 00:14:06.738 "is_configured": true, 00:14:06.738 "data_offset": 2048, 00:14:06.738 "data_size": 63488 00:14:06.738 } 00:14:06.738 ] 00:14:06.738 }' 00:14:06.738 05:03:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:06.738 05:03:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:06.998 05:03:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:14:06.998 05:03:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=Existed_Raid 00:14:06.998 05:03:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:06.998 05:03:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:06.998 05:03:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:14:06.998 05:03:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:06.998 05:03:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:06.998 05:03:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:06.999 05:03:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.999 05:03:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:06.999 [2024-12-14 05:03:17.836083] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:06.999 05:03:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.999 05:03:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:06.999 "name": "Existed_Raid", 00:14:06.999 "aliases": [ 00:14:06.999 "181e7c45-d659-4a1e-b853-77efb81cbcc2" 00:14:06.999 ], 00:14:06.999 "product_name": "Raid Volume", 00:14:06.999 "block_size": 512, 00:14:06.999 "num_blocks": 190464, 00:14:06.999 "uuid": "181e7c45-d659-4a1e-b853-77efb81cbcc2", 00:14:06.999 "assigned_rate_limits": { 00:14:06.999 "rw_ios_per_sec": 0, 00:14:06.999 "rw_mbytes_per_sec": 0, 00:14:06.999 "r_mbytes_per_sec": 0, 00:14:06.999 "w_mbytes_per_sec": 0 00:14:06.999 }, 00:14:06.999 "claimed": false, 00:14:06.999 "zoned": false, 00:14:06.999 "supported_io_types": { 00:14:06.999 "read": true, 00:14:06.999 "write": true, 00:14:06.999 "unmap": false, 00:14:06.999 "flush": false, 
00:14:06.999 "reset": true, 00:14:06.999 "nvme_admin": false, 00:14:06.999 "nvme_io": false, 00:14:06.999 "nvme_io_md": false, 00:14:06.999 "write_zeroes": true, 00:14:06.999 "zcopy": false, 00:14:06.999 "get_zone_info": false, 00:14:06.999 "zone_management": false, 00:14:06.999 "zone_append": false, 00:14:06.999 "compare": false, 00:14:06.999 "compare_and_write": false, 00:14:06.999 "abort": false, 00:14:06.999 "seek_hole": false, 00:14:06.999 "seek_data": false, 00:14:06.999 "copy": false, 00:14:06.999 "nvme_iov_md": false 00:14:06.999 }, 00:14:06.999 "driver_specific": { 00:14:06.999 "raid": { 00:14:06.999 "uuid": "181e7c45-d659-4a1e-b853-77efb81cbcc2", 00:14:06.999 "strip_size_kb": 64, 00:14:06.999 "state": "online", 00:14:06.999 "raid_level": "raid5f", 00:14:06.999 "superblock": true, 00:14:06.999 "num_base_bdevs": 4, 00:14:06.999 "num_base_bdevs_discovered": 4, 00:14:06.999 "num_base_bdevs_operational": 4, 00:14:06.999 "base_bdevs_list": [ 00:14:06.999 { 00:14:06.999 "name": "BaseBdev1", 00:14:06.999 "uuid": "1ac10483-04b6-4251-98d0-4e4b3917fb2d", 00:14:06.999 "is_configured": true, 00:14:06.999 "data_offset": 2048, 00:14:06.999 "data_size": 63488 00:14:06.999 }, 00:14:06.999 { 00:14:06.999 "name": "BaseBdev2", 00:14:06.999 "uuid": "3b9c510a-5b2f-45cd-9a22-518c3c4a0a80", 00:14:06.999 "is_configured": true, 00:14:06.999 "data_offset": 2048, 00:14:06.999 "data_size": 63488 00:14:06.999 }, 00:14:06.999 { 00:14:06.999 "name": "BaseBdev3", 00:14:06.999 "uuid": "74a43844-6a72-4c00-b108-1de344fc1ccf", 00:14:06.999 "is_configured": true, 00:14:06.999 "data_offset": 2048, 00:14:06.999 "data_size": 63488 00:14:06.999 }, 00:14:06.999 { 00:14:06.999 "name": "BaseBdev4", 00:14:06.999 "uuid": "98170824-9a7d-4fb2-b4e0-f52441ad61bf", 00:14:06.999 "is_configured": true, 00:14:06.999 "data_offset": 2048, 00:14:06.999 "data_size": 63488 00:14:06.999 } 00:14:06.999 ] 00:14:06.999 } 00:14:06.999 } 00:14:06.999 }' 00:14:06.999 05:03:17 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:07.259 05:03:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:14:07.259 BaseBdev2 00:14:07.259 BaseBdev3 00:14:07.259 BaseBdev4' 00:14:07.259 05:03:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:07.259 05:03:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:07.259 05:03:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:07.259 05:03:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:07.259 05:03:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:14:07.259 05:03:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.259 05:03:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:07.259 05:03:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.259 05:03:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:07.259 05:03:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:07.259 05:03:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:07.259 05:03:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:07.259 05:03:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:07.259 05:03:18 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.259 05:03:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:07.259 05:03:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.259 05:03:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:07.259 05:03:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:07.259 05:03:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:07.259 05:03:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:07.259 05:03:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:07.259 05:03:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.259 05:03:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:07.259 05:03:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.259 05:03:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:07.259 05:03:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:07.259 05:03:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:07.259 05:03:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:14:07.259 05:03:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.259 05:03:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:14:07.259 05:03:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:07.259 05:03:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.259 05:03:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:07.259 05:03:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:07.259 05:03:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:07.259 05:03:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.259 05:03:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:07.519 [2024-12-14 05:03:18.143441] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:07.519 05:03:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.519 05:03:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:14:07.519 05:03:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:14:07.520 05:03:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:07.520 05:03:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:14:07.520 05:03:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:14:07.520 05:03:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:14:07.520 05:03:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:07.520 05:03:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:14:07.520 05:03:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:07.520 05:03:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:07.520 05:03:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:07.520 05:03:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:07.520 05:03:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:07.520 05:03:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:07.520 05:03:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:07.520 05:03:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:07.520 05:03:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:07.520 05:03:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.520 05:03:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:07.520 05:03:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.520 05:03:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:07.520 "name": "Existed_Raid", 00:14:07.520 "uuid": "181e7c45-d659-4a1e-b853-77efb81cbcc2", 00:14:07.520 "strip_size_kb": 64, 00:14:07.520 "state": "online", 00:14:07.520 "raid_level": "raid5f", 00:14:07.520 "superblock": true, 00:14:07.520 "num_base_bdevs": 4, 00:14:07.520 "num_base_bdevs_discovered": 3, 00:14:07.520 "num_base_bdevs_operational": 3, 00:14:07.520 "base_bdevs_list": [ 00:14:07.520 { 00:14:07.520 "name": null, 00:14:07.520 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:14:07.520 "is_configured": false, 00:14:07.520 "data_offset": 0, 00:14:07.520 "data_size": 63488 00:14:07.520 }, 00:14:07.520 { 00:14:07.520 "name": "BaseBdev2", 00:14:07.520 "uuid": "3b9c510a-5b2f-45cd-9a22-518c3c4a0a80", 00:14:07.520 "is_configured": true, 00:14:07.520 "data_offset": 2048, 00:14:07.520 "data_size": 63488 00:14:07.520 }, 00:14:07.520 { 00:14:07.520 "name": "BaseBdev3", 00:14:07.520 "uuid": "74a43844-6a72-4c00-b108-1de344fc1ccf", 00:14:07.520 "is_configured": true, 00:14:07.520 "data_offset": 2048, 00:14:07.520 "data_size": 63488 00:14:07.520 }, 00:14:07.520 { 00:14:07.520 "name": "BaseBdev4", 00:14:07.520 "uuid": "98170824-9a7d-4fb2-b4e0-f52441ad61bf", 00:14:07.520 "is_configured": true, 00:14:07.520 "data_offset": 2048, 00:14:07.520 "data_size": 63488 00:14:07.520 } 00:14:07.520 ] 00:14:07.520 }' 00:14:07.520 05:03:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:07.520 05:03:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:07.780 05:03:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:14:07.780 05:03:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:07.780 05:03:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:07.780 05:03:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.780 05:03:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:07.780 05:03:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:07.780 05:03:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.780 05:03:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 
00:14:07.780 05:03:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:07.780 05:03:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:14:07.780 05:03:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.780 05:03:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:07.780 [2024-12-14 05:03:18.654201] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:07.780 [2024-12-14 05:03:18.654345] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:08.040 [2024-12-14 05:03:18.665313] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:08.040 05:03:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.040 05:03:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:08.040 05:03:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:08.040 05:03:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:08.040 05:03:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:08.040 05:03:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.040 05:03:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:08.040 05:03:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.040 05:03:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:08.040 05:03:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:08.040 
05:03:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:14:08.040 05:03:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.040 05:03:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:08.040 [2024-12-14 05:03:18.725217] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:08.040 05:03:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.040 05:03:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:08.040 05:03:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:08.040 05:03:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:08.040 05:03:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.040 05:03:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:08.040 05:03:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:08.040 05:03:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.041 05:03:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:08.041 05:03:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:08.041 05:03:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:14:08.041 05:03:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.041 05:03:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:08.041 [2024-12-14 05:03:18.792367] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:14:08.041 [2024-12-14 05:03:18.792485] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:14:08.041 05:03:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.041 05:03:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:08.041 05:03:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:08.041 05:03:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:14:08.041 05:03:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:08.041 05:03:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.041 05:03:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:08.041 05:03:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.041 05:03:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:14:08.041 05:03:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:14:08.041 05:03:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:14:08.041 05:03:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:14:08.041 05:03:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:08.041 05:03:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:08.041 05:03:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.041 05:03:18 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:14:08.041 BaseBdev2 00:14:08.041 05:03:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.041 05:03:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:14:08.041 05:03:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:14:08.041 05:03:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:08.041 05:03:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:14:08.041 05:03:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:08.041 05:03:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:08.041 05:03:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:08.041 05:03:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.041 05:03:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:08.041 05:03:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.041 05:03:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:08.041 05:03:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.041 05:03:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:08.041 [ 00:14:08.041 { 00:14:08.041 "name": "BaseBdev2", 00:14:08.041 "aliases": [ 00:14:08.041 "4f32805c-3e0d-4d4a-b335-12d53fe4c2f3" 00:14:08.041 ], 00:14:08.041 "product_name": "Malloc disk", 00:14:08.041 "block_size": 512, 00:14:08.041 "num_blocks": 65536, 00:14:08.041 "uuid": 
"4f32805c-3e0d-4d4a-b335-12d53fe4c2f3", 00:14:08.041 "assigned_rate_limits": { 00:14:08.041 "rw_ios_per_sec": 0, 00:14:08.041 "rw_mbytes_per_sec": 0, 00:14:08.041 "r_mbytes_per_sec": 0, 00:14:08.041 "w_mbytes_per_sec": 0 00:14:08.041 }, 00:14:08.041 "claimed": false, 00:14:08.041 "zoned": false, 00:14:08.041 "supported_io_types": { 00:14:08.041 "read": true, 00:14:08.041 "write": true, 00:14:08.041 "unmap": true, 00:14:08.041 "flush": true, 00:14:08.041 "reset": true, 00:14:08.041 "nvme_admin": false, 00:14:08.041 "nvme_io": false, 00:14:08.041 "nvme_io_md": false, 00:14:08.041 "write_zeroes": true, 00:14:08.041 "zcopy": true, 00:14:08.041 "get_zone_info": false, 00:14:08.041 "zone_management": false, 00:14:08.041 "zone_append": false, 00:14:08.041 "compare": false, 00:14:08.041 "compare_and_write": false, 00:14:08.041 "abort": true, 00:14:08.041 "seek_hole": false, 00:14:08.041 "seek_data": false, 00:14:08.041 "copy": true, 00:14:08.041 "nvme_iov_md": false 00:14:08.041 }, 00:14:08.041 "memory_domains": [ 00:14:08.041 { 00:14:08.041 "dma_device_id": "system", 00:14:08.041 "dma_device_type": 1 00:14:08.041 }, 00:14:08.041 { 00:14:08.041 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:08.041 "dma_device_type": 2 00:14:08.041 } 00:14:08.041 ], 00:14:08.041 "driver_specific": {} 00:14:08.041 } 00:14:08.041 ] 00:14:08.041 05:03:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.041 05:03:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:14:08.041 05:03:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:08.041 05:03:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:08.041 05:03:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:08.041 05:03:18 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.041 05:03:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:08.041 BaseBdev3 00:14:08.041 05:03:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.041 05:03:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:14:08.041 05:03:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:14:08.041 05:03:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:08.041 05:03:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:14:08.041 05:03:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:08.041 05:03:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:08.041 05:03:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:08.041 05:03:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.041 05:03:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:08.302 05:03:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.302 05:03:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:08.302 05:03:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.302 05:03:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:08.302 [ 00:14:08.302 { 00:14:08.302 "name": "BaseBdev3", 00:14:08.302 "aliases": [ 00:14:08.302 "bd28abfe-fbe7-4904-b844-c258875bd5e0" 00:14:08.302 ], 00:14:08.302 
"product_name": "Malloc disk", 00:14:08.302 "block_size": 512, 00:14:08.302 "num_blocks": 65536, 00:14:08.302 "uuid": "bd28abfe-fbe7-4904-b844-c258875bd5e0", 00:14:08.302 "assigned_rate_limits": { 00:14:08.302 "rw_ios_per_sec": 0, 00:14:08.302 "rw_mbytes_per_sec": 0, 00:14:08.302 "r_mbytes_per_sec": 0, 00:14:08.302 "w_mbytes_per_sec": 0 00:14:08.302 }, 00:14:08.302 "claimed": false, 00:14:08.302 "zoned": false, 00:14:08.302 "supported_io_types": { 00:14:08.302 "read": true, 00:14:08.302 "write": true, 00:14:08.302 "unmap": true, 00:14:08.302 "flush": true, 00:14:08.302 "reset": true, 00:14:08.302 "nvme_admin": false, 00:14:08.302 "nvme_io": false, 00:14:08.302 "nvme_io_md": false, 00:14:08.302 "write_zeroes": true, 00:14:08.302 "zcopy": true, 00:14:08.302 "get_zone_info": false, 00:14:08.302 "zone_management": false, 00:14:08.302 "zone_append": false, 00:14:08.302 "compare": false, 00:14:08.302 "compare_and_write": false, 00:14:08.303 "abort": true, 00:14:08.303 "seek_hole": false, 00:14:08.303 "seek_data": false, 00:14:08.303 "copy": true, 00:14:08.303 "nvme_iov_md": false 00:14:08.303 }, 00:14:08.303 "memory_domains": [ 00:14:08.303 { 00:14:08.303 "dma_device_id": "system", 00:14:08.303 "dma_device_type": 1 00:14:08.303 }, 00:14:08.303 { 00:14:08.303 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:08.303 "dma_device_type": 2 00:14:08.303 } 00:14:08.303 ], 00:14:08.303 "driver_specific": {} 00:14:08.303 } 00:14:08.303 ] 00:14:08.303 05:03:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.303 05:03:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:14:08.303 05:03:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:08.303 05:03:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:08.303 05:03:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd 
bdev_malloc_create 32 512 -b BaseBdev4 00:14:08.303 05:03:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.303 05:03:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:08.303 BaseBdev4 00:14:08.303 05:03:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.303 05:03:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:14:08.303 05:03:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:14:08.303 05:03:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:08.303 05:03:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:14:08.303 05:03:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:08.303 05:03:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:08.303 05:03:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:08.303 05:03:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.303 05:03:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:08.303 05:03:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.303 05:03:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:14:08.303 05:03:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.303 05:03:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:08.303 [ 00:14:08.303 { 00:14:08.303 "name": "BaseBdev4", 00:14:08.303 
"aliases": [ 00:14:08.303 "eb5bbaca-45a5-413f-8808-0fed70a3a386" 00:14:08.303 ], 00:14:08.303 "product_name": "Malloc disk", 00:14:08.303 "block_size": 512, 00:14:08.303 "num_blocks": 65536, 00:14:08.303 "uuid": "eb5bbaca-45a5-413f-8808-0fed70a3a386", 00:14:08.303 "assigned_rate_limits": { 00:14:08.303 "rw_ios_per_sec": 0, 00:14:08.303 "rw_mbytes_per_sec": 0, 00:14:08.303 "r_mbytes_per_sec": 0, 00:14:08.303 "w_mbytes_per_sec": 0 00:14:08.303 }, 00:14:08.303 "claimed": false, 00:14:08.303 "zoned": false, 00:14:08.303 "supported_io_types": { 00:14:08.303 "read": true, 00:14:08.303 "write": true, 00:14:08.303 "unmap": true, 00:14:08.303 "flush": true, 00:14:08.303 "reset": true, 00:14:08.303 "nvme_admin": false, 00:14:08.303 "nvme_io": false, 00:14:08.303 "nvme_io_md": false, 00:14:08.303 "write_zeroes": true, 00:14:08.303 "zcopy": true, 00:14:08.303 "get_zone_info": false, 00:14:08.303 "zone_management": false, 00:14:08.303 "zone_append": false, 00:14:08.303 "compare": false, 00:14:08.303 "compare_and_write": false, 00:14:08.303 "abort": true, 00:14:08.303 "seek_hole": false, 00:14:08.303 "seek_data": false, 00:14:08.303 "copy": true, 00:14:08.303 "nvme_iov_md": false 00:14:08.303 }, 00:14:08.303 "memory_domains": [ 00:14:08.303 { 00:14:08.303 "dma_device_id": "system", 00:14:08.303 "dma_device_type": 1 00:14:08.303 }, 00:14:08.303 { 00:14:08.303 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:08.303 "dma_device_type": 2 00:14:08.303 } 00:14:08.303 ], 00:14:08.303 "driver_specific": {} 00:14:08.303 } 00:14:08.303 ] 00:14:08.303 05:03:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.303 05:03:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:14:08.303 05:03:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:08.303 05:03:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:08.303 
05:03:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:08.303 05:03:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.303 05:03:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:08.303 [2024-12-14 05:03:19.007532] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:08.303 [2024-12-14 05:03:19.007673] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:08.303 [2024-12-14 05:03:19.007713] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:08.303 [2024-12-14 05:03:19.009474] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:08.303 [2024-12-14 05:03:19.009577] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:08.303 05:03:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.303 05:03:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:08.303 05:03:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:08.303 05:03:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:08.303 05:03:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:08.303 05:03:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:08.303 05:03:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:08.303 05:03:19 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:08.303 05:03:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:08.303 05:03:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:08.303 05:03:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:08.303 05:03:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:08.303 05:03:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:08.303 05:03:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.303 05:03:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:08.303 05:03:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.303 05:03:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:08.303 "name": "Existed_Raid", 00:14:08.303 "uuid": "e1cec661-e47b-4ac1-9ffa-489653ebdbbb", 00:14:08.303 "strip_size_kb": 64, 00:14:08.303 "state": "configuring", 00:14:08.303 "raid_level": "raid5f", 00:14:08.303 "superblock": true, 00:14:08.303 "num_base_bdevs": 4, 00:14:08.303 "num_base_bdevs_discovered": 3, 00:14:08.303 "num_base_bdevs_operational": 4, 00:14:08.303 "base_bdevs_list": [ 00:14:08.303 { 00:14:08.303 "name": "BaseBdev1", 00:14:08.303 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:08.303 "is_configured": false, 00:14:08.303 "data_offset": 0, 00:14:08.303 "data_size": 0 00:14:08.303 }, 00:14:08.303 { 00:14:08.303 "name": "BaseBdev2", 00:14:08.303 "uuid": "4f32805c-3e0d-4d4a-b335-12d53fe4c2f3", 00:14:08.303 "is_configured": true, 00:14:08.303 "data_offset": 2048, 00:14:08.303 "data_size": 63488 00:14:08.303 }, 00:14:08.303 { 00:14:08.303 "name": "BaseBdev3", 
00:14:08.303 "uuid": "bd28abfe-fbe7-4904-b844-c258875bd5e0", 00:14:08.303 "is_configured": true, 00:14:08.303 "data_offset": 2048, 00:14:08.303 "data_size": 63488 00:14:08.303 }, 00:14:08.303 { 00:14:08.303 "name": "BaseBdev4", 00:14:08.303 "uuid": "eb5bbaca-45a5-413f-8808-0fed70a3a386", 00:14:08.303 "is_configured": true, 00:14:08.303 "data_offset": 2048, 00:14:08.303 "data_size": 63488 00:14:08.303 } 00:14:08.303 ] 00:14:08.303 }' 00:14:08.303 05:03:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:08.303 05:03:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:08.564 05:03:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:08.564 05:03:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.564 05:03:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:08.564 [2024-12-14 05:03:19.374922] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:08.564 05:03:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.564 05:03:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:08.564 05:03:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:08.564 05:03:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:08.564 05:03:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:08.564 05:03:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:08.564 05:03:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:08.564 
05:03:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:08.564 05:03:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:08.564 05:03:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:08.564 05:03:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:08.564 05:03:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:08.564 05:03:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:08.564 05:03:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.564 05:03:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:08.564 05:03:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.564 05:03:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:08.564 "name": "Existed_Raid", 00:14:08.564 "uuid": "e1cec661-e47b-4ac1-9ffa-489653ebdbbb", 00:14:08.564 "strip_size_kb": 64, 00:14:08.564 "state": "configuring", 00:14:08.564 "raid_level": "raid5f", 00:14:08.564 "superblock": true, 00:14:08.564 "num_base_bdevs": 4, 00:14:08.564 "num_base_bdevs_discovered": 2, 00:14:08.564 "num_base_bdevs_operational": 4, 00:14:08.564 "base_bdevs_list": [ 00:14:08.564 { 00:14:08.564 "name": "BaseBdev1", 00:14:08.564 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:08.564 "is_configured": false, 00:14:08.564 "data_offset": 0, 00:14:08.564 "data_size": 0 00:14:08.564 }, 00:14:08.564 { 00:14:08.564 "name": null, 00:14:08.564 "uuid": "4f32805c-3e0d-4d4a-b335-12d53fe4c2f3", 00:14:08.564 "is_configured": false, 00:14:08.564 "data_offset": 0, 00:14:08.564 "data_size": 63488 00:14:08.564 }, 00:14:08.564 { 
00:14:08.564 "name": "BaseBdev3", 00:14:08.564 "uuid": "bd28abfe-fbe7-4904-b844-c258875bd5e0", 00:14:08.564 "is_configured": true, 00:14:08.564 "data_offset": 2048, 00:14:08.564 "data_size": 63488 00:14:08.564 }, 00:14:08.564 { 00:14:08.564 "name": "BaseBdev4", 00:14:08.564 "uuid": "eb5bbaca-45a5-413f-8808-0fed70a3a386", 00:14:08.564 "is_configured": true, 00:14:08.564 "data_offset": 2048, 00:14:08.564 "data_size": 63488 00:14:08.564 } 00:14:08.564 ] 00:14:08.564 }' 00:14:08.564 05:03:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:08.564 05:03:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:09.134 05:03:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:09.134 05:03:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.134 05:03:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:09.134 05:03:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:09.134 05:03:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.134 05:03:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:14:09.134 05:03:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:09.134 05:03:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.134 05:03:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:09.134 [2024-12-14 05:03:19.945020] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:09.134 BaseBdev1 00:14:09.134 05:03:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:14:09.134 05:03:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:14:09.134 05:03:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:14:09.134 05:03:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:09.134 05:03:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:14:09.134 05:03:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:09.134 05:03:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:09.134 05:03:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:09.134 05:03:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.134 05:03:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:09.134 05:03:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.134 05:03:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:09.134 05:03:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.134 05:03:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:09.134 [ 00:14:09.134 { 00:14:09.134 "name": "BaseBdev1", 00:14:09.134 "aliases": [ 00:14:09.134 "cfaf607a-96cb-4553-9370-b142278f8f3b" 00:14:09.134 ], 00:14:09.134 "product_name": "Malloc disk", 00:14:09.134 "block_size": 512, 00:14:09.134 "num_blocks": 65536, 00:14:09.134 "uuid": "cfaf607a-96cb-4553-9370-b142278f8f3b", 00:14:09.134 "assigned_rate_limits": { 00:14:09.134 "rw_ios_per_sec": 0, 00:14:09.134 "rw_mbytes_per_sec": 0, 00:14:09.134 
"r_mbytes_per_sec": 0, 00:14:09.134 "w_mbytes_per_sec": 0 00:14:09.134 }, 00:14:09.134 "claimed": true, 00:14:09.134 "claim_type": "exclusive_write", 00:14:09.134 "zoned": false, 00:14:09.134 "supported_io_types": { 00:14:09.134 "read": true, 00:14:09.134 "write": true, 00:14:09.134 "unmap": true, 00:14:09.134 "flush": true, 00:14:09.134 "reset": true, 00:14:09.134 "nvme_admin": false, 00:14:09.134 "nvme_io": false, 00:14:09.134 "nvme_io_md": false, 00:14:09.134 "write_zeroes": true, 00:14:09.134 "zcopy": true, 00:14:09.134 "get_zone_info": false, 00:14:09.134 "zone_management": false, 00:14:09.134 "zone_append": false, 00:14:09.134 "compare": false, 00:14:09.134 "compare_and_write": false, 00:14:09.134 "abort": true, 00:14:09.134 "seek_hole": false, 00:14:09.134 "seek_data": false, 00:14:09.134 "copy": true, 00:14:09.134 "nvme_iov_md": false 00:14:09.134 }, 00:14:09.134 "memory_domains": [ 00:14:09.134 { 00:14:09.134 "dma_device_id": "system", 00:14:09.134 "dma_device_type": 1 00:14:09.134 }, 00:14:09.134 { 00:14:09.134 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:09.134 "dma_device_type": 2 00:14:09.134 } 00:14:09.134 ], 00:14:09.134 "driver_specific": {} 00:14:09.134 } 00:14:09.134 ] 00:14:09.134 05:03:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.134 05:03:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:14:09.134 05:03:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:09.134 05:03:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:09.135 05:03:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:09.135 05:03:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:09.135 05:03:19 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:09.135 05:03:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:09.135 05:03:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:09.135 05:03:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:09.135 05:03:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:09.135 05:03:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:09.135 05:03:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:09.135 05:03:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:09.135 05:03:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.135 05:03:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:09.135 05:03:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.394 05:03:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:09.394 "name": "Existed_Raid", 00:14:09.394 "uuid": "e1cec661-e47b-4ac1-9ffa-489653ebdbbb", 00:14:09.394 "strip_size_kb": 64, 00:14:09.394 "state": "configuring", 00:14:09.394 "raid_level": "raid5f", 00:14:09.394 "superblock": true, 00:14:09.394 "num_base_bdevs": 4, 00:14:09.394 "num_base_bdevs_discovered": 3, 00:14:09.394 "num_base_bdevs_operational": 4, 00:14:09.394 "base_bdevs_list": [ 00:14:09.394 { 00:14:09.394 "name": "BaseBdev1", 00:14:09.394 "uuid": "cfaf607a-96cb-4553-9370-b142278f8f3b", 00:14:09.394 "is_configured": true, 00:14:09.394 "data_offset": 2048, 00:14:09.394 "data_size": 63488 00:14:09.394 
}, 00:14:09.394 { 00:14:09.394 "name": null, 00:14:09.394 "uuid": "4f32805c-3e0d-4d4a-b335-12d53fe4c2f3", 00:14:09.394 "is_configured": false, 00:14:09.394 "data_offset": 0, 00:14:09.394 "data_size": 63488 00:14:09.394 }, 00:14:09.394 { 00:14:09.394 "name": "BaseBdev3", 00:14:09.394 "uuid": "bd28abfe-fbe7-4904-b844-c258875bd5e0", 00:14:09.394 "is_configured": true, 00:14:09.394 "data_offset": 2048, 00:14:09.394 "data_size": 63488 00:14:09.394 }, 00:14:09.394 { 00:14:09.394 "name": "BaseBdev4", 00:14:09.394 "uuid": "eb5bbaca-45a5-413f-8808-0fed70a3a386", 00:14:09.394 "is_configured": true, 00:14:09.394 "data_offset": 2048, 00:14:09.394 "data_size": 63488 00:14:09.394 } 00:14:09.394 ] 00:14:09.394 }' 00:14:09.394 05:03:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:09.394 05:03:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:09.654 05:03:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:09.654 05:03:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:09.654 05:03:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.654 05:03:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:09.654 05:03:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.654 05:03:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:14:09.654 05:03:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:14:09.654 05:03:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.654 05:03:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:09.654 
[2024-12-14 05:03:20.520076] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:09.654 05:03:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.654 05:03:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:09.654 05:03:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:09.654 05:03:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:09.654 05:03:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:09.654 05:03:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:09.654 05:03:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:09.654 05:03:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:09.654 05:03:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:09.654 05:03:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:09.654 05:03:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:09.654 05:03:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:09.654 05:03:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:09.654 05:03:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.654 05:03:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:09.914 05:03:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:14:09.914 05:03:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:09.914 "name": "Existed_Raid", 00:14:09.914 "uuid": "e1cec661-e47b-4ac1-9ffa-489653ebdbbb", 00:14:09.914 "strip_size_kb": 64, 00:14:09.914 "state": "configuring", 00:14:09.914 "raid_level": "raid5f", 00:14:09.914 "superblock": true, 00:14:09.914 "num_base_bdevs": 4, 00:14:09.914 "num_base_bdevs_discovered": 2, 00:14:09.914 "num_base_bdevs_operational": 4, 00:14:09.914 "base_bdevs_list": [ 00:14:09.914 { 00:14:09.914 "name": "BaseBdev1", 00:14:09.914 "uuid": "cfaf607a-96cb-4553-9370-b142278f8f3b", 00:14:09.914 "is_configured": true, 00:14:09.914 "data_offset": 2048, 00:14:09.914 "data_size": 63488 00:14:09.914 }, 00:14:09.914 { 00:14:09.914 "name": null, 00:14:09.914 "uuid": "4f32805c-3e0d-4d4a-b335-12d53fe4c2f3", 00:14:09.914 "is_configured": false, 00:14:09.914 "data_offset": 0, 00:14:09.914 "data_size": 63488 00:14:09.914 }, 00:14:09.914 { 00:14:09.914 "name": null, 00:14:09.914 "uuid": "bd28abfe-fbe7-4904-b844-c258875bd5e0", 00:14:09.914 "is_configured": false, 00:14:09.914 "data_offset": 0, 00:14:09.914 "data_size": 63488 00:14:09.914 }, 00:14:09.914 { 00:14:09.914 "name": "BaseBdev4", 00:14:09.914 "uuid": "eb5bbaca-45a5-413f-8808-0fed70a3a386", 00:14:09.914 "is_configured": true, 00:14:09.914 "data_offset": 2048, 00:14:09.914 "data_size": 63488 00:14:09.914 } 00:14:09.914 ] 00:14:09.914 }' 00:14:09.914 05:03:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:09.914 05:03:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:10.174 05:03:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:10.174 05:03:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:10.174 05:03:20 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.174 05:03:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:10.174 05:03:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.174 05:03:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:14:10.174 05:03:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:14:10.174 05:03:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.174 05:03:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:10.174 [2024-12-14 05:03:20.987412] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:10.174 05:03:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.174 05:03:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:10.174 05:03:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:10.174 05:03:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:10.174 05:03:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:10.174 05:03:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:10.174 05:03:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:10.174 05:03:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:10.174 05:03:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:10.174 05:03:20 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:10.174 05:03:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:10.174 05:03:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:10.174 05:03:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:10.174 05:03:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.174 05:03:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:10.174 05:03:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.174 05:03:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:10.174 "name": "Existed_Raid", 00:14:10.174 "uuid": "e1cec661-e47b-4ac1-9ffa-489653ebdbbb", 00:14:10.174 "strip_size_kb": 64, 00:14:10.174 "state": "configuring", 00:14:10.174 "raid_level": "raid5f", 00:14:10.174 "superblock": true, 00:14:10.174 "num_base_bdevs": 4, 00:14:10.174 "num_base_bdevs_discovered": 3, 00:14:10.174 "num_base_bdevs_operational": 4, 00:14:10.174 "base_bdevs_list": [ 00:14:10.174 { 00:14:10.174 "name": "BaseBdev1", 00:14:10.174 "uuid": "cfaf607a-96cb-4553-9370-b142278f8f3b", 00:14:10.174 "is_configured": true, 00:14:10.174 "data_offset": 2048, 00:14:10.174 "data_size": 63488 00:14:10.174 }, 00:14:10.174 { 00:14:10.174 "name": null, 00:14:10.174 "uuid": "4f32805c-3e0d-4d4a-b335-12d53fe4c2f3", 00:14:10.174 "is_configured": false, 00:14:10.174 "data_offset": 0, 00:14:10.174 "data_size": 63488 00:14:10.174 }, 00:14:10.174 { 00:14:10.174 "name": "BaseBdev3", 00:14:10.174 "uuid": "bd28abfe-fbe7-4904-b844-c258875bd5e0", 00:14:10.174 "is_configured": true, 00:14:10.174 "data_offset": 2048, 00:14:10.174 "data_size": 63488 00:14:10.174 }, 00:14:10.174 { 
00:14:10.174 "name": "BaseBdev4", 00:14:10.174 "uuid": "eb5bbaca-45a5-413f-8808-0fed70a3a386", 00:14:10.174 "is_configured": true, 00:14:10.174 "data_offset": 2048, 00:14:10.174 "data_size": 63488 00:14:10.174 } 00:14:10.174 ] 00:14:10.174 }' 00:14:10.174 05:03:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:10.174 05:03:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:10.745 05:03:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:10.745 05:03:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.745 05:03:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:10.745 05:03:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:10.745 05:03:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.745 05:03:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:14:10.745 05:03:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:10.745 05:03:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.745 05:03:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:10.745 [2024-12-14 05:03:21.474680] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:10.745 05:03:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.745 05:03:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:10.745 05:03:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:14:10.745 05:03:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:10.745 05:03:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:10.745 05:03:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:10.745 05:03:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:10.745 05:03:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:10.745 05:03:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:10.745 05:03:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:10.745 05:03:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:10.745 05:03:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:10.745 05:03:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:10.745 05:03:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.745 05:03:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:10.745 05:03:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.745 05:03:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:10.745 "name": "Existed_Raid", 00:14:10.745 "uuid": "e1cec661-e47b-4ac1-9ffa-489653ebdbbb", 00:14:10.745 "strip_size_kb": 64, 00:14:10.745 "state": "configuring", 00:14:10.745 "raid_level": "raid5f", 00:14:10.745 "superblock": true, 00:14:10.745 "num_base_bdevs": 4, 00:14:10.745 "num_base_bdevs_discovered": 2, 00:14:10.745 
"num_base_bdevs_operational": 4, 00:14:10.745 "base_bdevs_list": [ 00:14:10.745 { 00:14:10.745 "name": null, 00:14:10.745 "uuid": "cfaf607a-96cb-4553-9370-b142278f8f3b", 00:14:10.745 "is_configured": false, 00:14:10.745 "data_offset": 0, 00:14:10.745 "data_size": 63488 00:14:10.745 }, 00:14:10.745 { 00:14:10.745 "name": null, 00:14:10.745 "uuid": "4f32805c-3e0d-4d4a-b335-12d53fe4c2f3", 00:14:10.745 "is_configured": false, 00:14:10.745 "data_offset": 0, 00:14:10.745 "data_size": 63488 00:14:10.745 }, 00:14:10.745 { 00:14:10.745 "name": "BaseBdev3", 00:14:10.745 "uuid": "bd28abfe-fbe7-4904-b844-c258875bd5e0", 00:14:10.745 "is_configured": true, 00:14:10.745 "data_offset": 2048, 00:14:10.745 "data_size": 63488 00:14:10.745 }, 00:14:10.745 { 00:14:10.745 "name": "BaseBdev4", 00:14:10.745 "uuid": "eb5bbaca-45a5-413f-8808-0fed70a3a386", 00:14:10.745 "is_configured": true, 00:14:10.745 "data_offset": 2048, 00:14:10.745 "data_size": 63488 00:14:10.745 } 00:14:10.745 ] 00:14:10.745 }' 00:14:10.745 05:03:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:10.745 05:03:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:11.315 05:03:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:11.315 05:03:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.315 05:03:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:11.315 05:03:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:11.315 05:03:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.315 05:03:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:14:11.315 05:03:21 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:14:11.315 05:03:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.315 05:03:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:11.315 [2024-12-14 05:03:21.996338] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:11.315 05:03:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.315 05:03:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:11.315 05:03:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:11.315 05:03:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:11.315 05:03:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:11.315 05:03:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:11.315 05:03:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:11.315 05:03:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:11.315 05:03:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:11.315 05:03:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:11.315 05:03:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:11.315 05:03:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:11.315 05:03:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:14:11.315 05:03:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.315 05:03:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:11.315 05:03:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.315 05:03:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:11.315 "name": "Existed_Raid", 00:14:11.315 "uuid": "e1cec661-e47b-4ac1-9ffa-489653ebdbbb", 00:14:11.315 "strip_size_kb": 64, 00:14:11.315 "state": "configuring", 00:14:11.315 "raid_level": "raid5f", 00:14:11.315 "superblock": true, 00:14:11.315 "num_base_bdevs": 4, 00:14:11.315 "num_base_bdevs_discovered": 3, 00:14:11.315 "num_base_bdevs_operational": 4, 00:14:11.315 "base_bdevs_list": [ 00:14:11.315 { 00:14:11.315 "name": null, 00:14:11.315 "uuid": "cfaf607a-96cb-4553-9370-b142278f8f3b", 00:14:11.315 "is_configured": false, 00:14:11.315 "data_offset": 0, 00:14:11.315 "data_size": 63488 00:14:11.315 }, 00:14:11.315 { 00:14:11.315 "name": "BaseBdev2", 00:14:11.315 "uuid": "4f32805c-3e0d-4d4a-b335-12d53fe4c2f3", 00:14:11.315 "is_configured": true, 00:14:11.315 "data_offset": 2048, 00:14:11.315 "data_size": 63488 00:14:11.315 }, 00:14:11.315 { 00:14:11.315 "name": "BaseBdev3", 00:14:11.315 "uuid": "bd28abfe-fbe7-4904-b844-c258875bd5e0", 00:14:11.315 "is_configured": true, 00:14:11.315 "data_offset": 2048, 00:14:11.315 "data_size": 63488 00:14:11.315 }, 00:14:11.315 { 00:14:11.315 "name": "BaseBdev4", 00:14:11.315 "uuid": "eb5bbaca-45a5-413f-8808-0fed70a3a386", 00:14:11.315 "is_configured": true, 00:14:11.315 "data_offset": 2048, 00:14:11.315 "data_size": 63488 00:14:11.315 } 00:14:11.315 ] 00:14:11.315 }' 00:14:11.315 05:03:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:11.315 05:03:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:14:11.885 05:03:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:11.885 05:03:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:11.885 05:03:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.885 05:03:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:11.885 05:03:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.885 05:03:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:14:11.885 05:03:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:14:11.885 05:03:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:11.885 05:03:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.885 05:03:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:11.885 05:03:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.885 05:03:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u cfaf607a-96cb-4553-9370-b142278f8f3b 00:14:11.885 05:03:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.885 05:03:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:11.885 [2024-12-14 05:03:22.566289] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:14:11.885 [2024-12-14 05:03:22.566467] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:14:11.885 [2024-12-14 
05:03:22.566479] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:14:11.885 [2024-12-14 05:03:22.566708] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:14:11.885 NewBaseBdev 00:14:11.885 [2024-12-14 05:03:22.567136] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:14:11.885 [2024-12-14 05:03:22.567151] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:14:11.885 [2024-12-14 05:03:22.567262] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:11.885 05:03:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.885 05:03:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:14:11.885 05:03:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:14:11.885 05:03:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:11.885 05:03:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:14:11.885 05:03:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:11.885 05:03:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:11.885 05:03:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:11.885 05:03:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.885 05:03:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:11.885 05:03:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.885 05:03:22 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:14:11.885 05:03:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.885 05:03:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:11.885 [ 00:14:11.885 { 00:14:11.885 "name": "NewBaseBdev", 00:14:11.885 "aliases": [ 00:14:11.885 "cfaf607a-96cb-4553-9370-b142278f8f3b" 00:14:11.885 ], 00:14:11.885 "product_name": "Malloc disk", 00:14:11.885 "block_size": 512, 00:14:11.885 "num_blocks": 65536, 00:14:11.885 "uuid": "cfaf607a-96cb-4553-9370-b142278f8f3b", 00:14:11.885 "assigned_rate_limits": { 00:14:11.885 "rw_ios_per_sec": 0, 00:14:11.885 "rw_mbytes_per_sec": 0, 00:14:11.885 "r_mbytes_per_sec": 0, 00:14:11.885 "w_mbytes_per_sec": 0 00:14:11.885 }, 00:14:11.885 "claimed": true, 00:14:11.885 "claim_type": "exclusive_write", 00:14:11.885 "zoned": false, 00:14:11.885 "supported_io_types": { 00:14:11.885 "read": true, 00:14:11.885 "write": true, 00:14:11.885 "unmap": true, 00:14:11.885 "flush": true, 00:14:11.885 "reset": true, 00:14:11.885 "nvme_admin": false, 00:14:11.885 "nvme_io": false, 00:14:11.885 "nvme_io_md": false, 00:14:11.885 "write_zeroes": true, 00:14:11.885 "zcopy": true, 00:14:11.885 "get_zone_info": false, 00:14:11.885 "zone_management": false, 00:14:11.885 "zone_append": false, 00:14:11.885 "compare": false, 00:14:11.885 "compare_and_write": false, 00:14:11.885 "abort": true, 00:14:11.885 "seek_hole": false, 00:14:11.885 "seek_data": false, 00:14:11.885 "copy": true, 00:14:11.885 "nvme_iov_md": false 00:14:11.885 }, 00:14:11.885 "memory_domains": [ 00:14:11.885 { 00:14:11.885 "dma_device_id": "system", 00:14:11.885 "dma_device_type": 1 00:14:11.885 }, 00:14:11.885 { 00:14:11.885 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:11.886 "dma_device_type": 2 00:14:11.886 } 00:14:11.886 ], 00:14:11.886 "driver_specific": {} 00:14:11.886 } 00:14:11.886 ] 00:14:11.886 05:03:22 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.886 05:03:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:14:11.886 05:03:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:14:11.886 05:03:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:11.886 05:03:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:11.886 05:03:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:11.886 05:03:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:11.886 05:03:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:11.886 05:03:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:11.886 05:03:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:11.886 05:03:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:11.886 05:03:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:11.886 05:03:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:11.886 05:03:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.886 05:03:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:11.886 05:03:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:11.886 05:03:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:14:11.886 05:03:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:11.886 "name": "Existed_Raid", 00:14:11.886 "uuid": "e1cec661-e47b-4ac1-9ffa-489653ebdbbb", 00:14:11.886 "strip_size_kb": 64, 00:14:11.886 "state": "online", 00:14:11.886 "raid_level": "raid5f", 00:14:11.886 "superblock": true, 00:14:11.886 "num_base_bdevs": 4, 00:14:11.886 "num_base_bdevs_discovered": 4, 00:14:11.886 "num_base_bdevs_operational": 4, 00:14:11.886 "base_bdevs_list": [ 00:14:11.886 { 00:14:11.886 "name": "NewBaseBdev", 00:14:11.886 "uuid": "cfaf607a-96cb-4553-9370-b142278f8f3b", 00:14:11.886 "is_configured": true, 00:14:11.886 "data_offset": 2048, 00:14:11.886 "data_size": 63488 00:14:11.886 }, 00:14:11.886 { 00:14:11.886 "name": "BaseBdev2", 00:14:11.886 "uuid": "4f32805c-3e0d-4d4a-b335-12d53fe4c2f3", 00:14:11.886 "is_configured": true, 00:14:11.886 "data_offset": 2048, 00:14:11.886 "data_size": 63488 00:14:11.886 }, 00:14:11.886 { 00:14:11.886 "name": "BaseBdev3", 00:14:11.886 "uuid": "bd28abfe-fbe7-4904-b844-c258875bd5e0", 00:14:11.886 "is_configured": true, 00:14:11.886 "data_offset": 2048, 00:14:11.886 "data_size": 63488 00:14:11.886 }, 00:14:11.886 { 00:14:11.886 "name": "BaseBdev4", 00:14:11.886 "uuid": "eb5bbaca-45a5-413f-8808-0fed70a3a386", 00:14:11.886 "is_configured": true, 00:14:11.886 "data_offset": 2048, 00:14:11.886 "data_size": 63488 00:14:11.886 } 00:14:11.886 ] 00:14:11.886 }' 00:14:11.886 05:03:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:11.886 05:03:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:12.456 05:03:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:14:12.456 05:03:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:12.456 05:03:23 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:12.456 05:03:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:12.456 05:03:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:14:12.456 05:03:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:12.456 05:03:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:12.456 05:03:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:12.456 05:03:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.456 05:03:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:12.456 [2024-12-14 05:03:23.069624] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:12.456 05:03:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.456 05:03:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:12.456 "name": "Existed_Raid", 00:14:12.456 "aliases": [ 00:14:12.456 "e1cec661-e47b-4ac1-9ffa-489653ebdbbb" 00:14:12.456 ], 00:14:12.456 "product_name": "Raid Volume", 00:14:12.456 "block_size": 512, 00:14:12.456 "num_blocks": 190464, 00:14:12.456 "uuid": "e1cec661-e47b-4ac1-9ffa-489653ebdbbb", 00:14:12.456 "assigned_rate_limits": { 00:14:12.456 "rw_ios_per_sec": 0, 00:14:12.456 "rw_mbytes_per_sec": 0, 00:14:12.456 "r_mbytes_per_sec": 0, 00:14:12.456 "w_mbytes_per_sec": 0 00:14:12.456 }, 00:14:12.456 "claimed": false, 00:14:12.456 "zoned": false, 00:14:12.456 "supported_io_types": { 00:14:12.456 "read": true, 00:14:12.456 "write": true, 00:14:12.456 "unmap": false, 00:14:12.456 "flush": false, 00:14:12.456 "reset": true, 00:14:12.456 "nvme_admin": false, 00:14:12.456 "nvme_io": false, 
00:14:12.456 "nvme_io_md": false, 00:14:12.456 "write_zeroes": true, 00:14:12.456 "zcopy": false, 00:14:12.456 "get_zone_info": false, 00:14:12.456 "zone_management": false, 00:14:12.456 "zone_append": false, 00:14:12.456 "compare": false, 00:14:12.456 "compare_and_write": false, 00:14:12.456 "abort": false, 00:14:12.456 "seek_hole": false, 00:14:12.456 "seek_data": false, 00:14:12.456 "copy": false, 00:14:12.456 "nvme_iov_md": false 00:14:12.456 }, 00:14:12.456 "driver_specific": { 00:14:12.456 "raid": { 00:14:12.456 "uuid": "e1cec661-e47b-4ac1-9ffa-489653ebdbbb", 00:14:12.456 "strip_size_kb": 64, 00:14:12.456 "state": "online", 00:14:12.456 "raid_level": "raid5f", 00:14:12.456 "superblock": true, 00:14:12.456 "num_base_bdevs": 4, 00:14:12.456 "num_base_bdevs_discovered": 4, 00:14:12.456 "num_base_bdevs_operational": 4, 00:14:12.456 "base_bdevs_list": [ 00:14:12.456 { 00:14:12.456 "name": "NewBaseBdev", 00:14:12.456 "uuid": "cfaf607a-96cb-4553-9370-b142278f8f3b", 00:14:12.456 "is_configured": true, 00:14:12.456 "data_offset": 2048, 00:14:12.456 "data_size": 63488 00:14:12.456 }, 00:14:12.456 { 00:14:12.456 "name": "BaseBdev2", 00:14:12.456 "uuid": "4f32805c-3e0d-4d4a-b335-12d53fe4c2f3", 00:14:12.456 "is_configured": true, 00:14:12.456 "data_offset": 2048, 00:14:12.456 "data_size": 63488 00:14:12.456 }, 00:14:12.456 { 00:14:12.456 "name": "BaseBdev3", 00:14:12.456 "uuid": "bd28abfe-fbe7-4904-b844-c258875bd5e0", 00:14:12.456 "is_configured": true, 00:14:12.456 "data_offset": 2048, 00:14:12.456 "data_size": 63488 00:14:12.456 }, 00:14:12.456 { 00:14:12.456 "name": "BaseBdev4", 00:14:12.456 "uuid": "eb5bbaca-45a5-413f-8808-0fed70a3a386", 00:14:12.456 "is_configured": true, 00:14:12.456 "data_offset": 2048, 00:14:12.456 "data_size": 63488 00:14:12.456 } 00:14:12.456 ] 00:14:12.456 } 00:14:12.456 } 00:14:12.456 }' 00:14:12.456 05:03:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | 
select(.is_configured == true).name' 00:14:12.456 05:03:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:14:12.456 BaseBdev2 00:14:12.456 BaseBdev3 00:14:12.456 BaseBdev4' 00:14:12.456 05:03:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:12.456 05:03:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:12.456 05:03:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:12.456 05:03:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:12.456 05:03:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:14:12.456 05:03:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.456 05:03:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:12.456 05:03:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.456 05:03:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:12.457 05:03:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:12.457 05:03:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:12.457 05:03:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:12.457 05:03:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:12.457 05:03:23 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.457 05:03:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:12.457 05:03:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.457 05:03:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:12.457 05:03:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:12.457 05:03:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:12.457 05:03:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:12.457 05:03:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.457 05:03:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:12.457 05:03:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:12.717 05:03:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.717 05:03:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:12.717 05:03:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:12.717 05:03:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:12.717 05:03:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:14:12.717 05:03:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:12.717 05:03:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:14:12.717 05:03:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:12.717 05:03:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.717 05:03:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:12.717 05:03:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:12.717 05:03:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:12.717 05:03:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.717 05:03:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:12.717 [2024-12-14 05:03:23.416862] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:12.717 [2024-12-14 05:03:23.416934] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:12.717 [2024-12-14 05:03:23.417028] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:12.718 [2024-12-14 05:03:23.417308] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:12.718 [2024-12-14 05:03:23.417362] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline 00:14:12.718 05:03:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.718 05:03:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 93907 00:14:12.718 05:03:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 93907 ']' 00:14:12.718 05:03:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 93907 00:14:12.718 05:03:23 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:14:12.718 05:03:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:12.718 05:03:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 93907 00:14:12.718 killing process with pid 93907 00:14:12.718 05:03:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:12.718 05:03:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:12.718 05:03:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 93907' 00:14:12.718 05:03:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 93907 00:14:12.718 [2024-12-14 05:03:23.462776] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:12.718 05:03:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 93907 00:14:12.718 [2024-12-14 05:03:23.502835] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:12.978 05:03:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:14:12.978 00:14:12.978 real 0m9.692s 00:14:12.978 user 0m16.475s 00:14:12.978 sys 0m2.161s 00:14:12.978 05:03:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:12.978 ************************************ 00:14:12.978 END TEST raid5f_state_function_test_sb 00:14:12.978 ************************************ 00:14:12.978 05:03:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:12.978 05:03:23 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 4 00:14:12.978 05:03:23 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:14:12.978 
05:03:23 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:12.978 05:03:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:12.978 ************************************ 00:14:12.978 START TEST raid5f_superblock_test 00:14:12.978 ************************************ 00:14:12.978 05:03:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid5f 4 00:14:12.978 05:03:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:14:12.978 05:03:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:14:12.978 05:03:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:14:12.978 05:03:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:14:12.978 05:03:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:14:12.978 05:03:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:14:12.978 05:03:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:14:12.978 05:03:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:14:12.978 05:03:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:14:12.978 05:03:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:14:12.978 05:03:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:14:12.978 05:03:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:14:12.978 05:03:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:14:12.978 05:03:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:14:12.978 05:03:23 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@405 -- # strip_size=64 00:14:12.978 05:03:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:14:12.978 05:03:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=94562 00:14:12.978 05:03:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:14:12.978 05:03:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 94562 00:14:12.978 05:03:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 94562 ']' 00:14:12.979 05:03:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:12.979 05:03:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:12.979 05:03:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:12.979 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:12.979 05:03:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:12.979 05:03:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.238 [2024-12-14 05:03:23.917593] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:14:13.238 [2024-12-14 05:03:23.917788] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94562 ] 00:14:13.238 [2024-12-14 05:03:24.076154] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:13.498 [2024-12-14 05:03:24.124598] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:13.499 [2024-12-14 05:03:24.167438] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:13.499 [2024-12-14 05:03:24.167474] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:14.078 05:03:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:14.078 05:03:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:14:14.078 05:03:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:14:14.078 05:03:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:14.078 05:03:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:14:14.078 05:03:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:14:14.078 05:03:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:14:14.078 05:03:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:14.078 05:03:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:14.078 05:03:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:14.078 05:03:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:14:14.078 05:03:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.078 05:03:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.078 malloc1 00:14:14.078 05:03:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.078 05:03:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:14.078 05:03:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.078 05:03:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.078 [2024-12-14 05:03:24.761842] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:14.078 [2024-12-14 05:03:24.761989] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:14.079 [2024-12-14 05:03:24.762027] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:14.079 [2024-12-14 05:03:24.762062] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:14.079 [2024-12-14 05:03:24.764240] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:14.079 [2024-12-14 05:03:24.764316] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:14.079 pt1 00:14:14.079 05:03:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.079 05:03:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:14.079 05:03:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:14.079 05:03:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:14:14.079 05:03:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:14:14.079 05:03:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:14:14.079 05:03:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:14.079 05:03:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:14.079 05:03:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:14.079 05:03:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:14:14.079 05:03:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.079 05:03:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.079 malloc2 00:14:14.079 05:03:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.079 05:03:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:14.079 05:03:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.079 05:03:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.079 [2024-12-14 05:03:24.801275] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:14.079 [2024-12-14 05:03:24.801384] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:14.079 [2024-12-14 05:03:24.801424] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:14.079 [2024-12-14 05:03:24.801455] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:14.079 [2024-12-14 05:03:24.803633] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:14.079 [2024-12-14 05:03:24.803706] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:14.079 pt2 00:14:14.079 05:03:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.079 05:03:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:14.079 05:03:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:14.079 05:03:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:14:14.079 05:03:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:14:14.079 05:03:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:14:14.079 05:03:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:14.079 05:03:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:14.079 05:03:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:14.079 05:03:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:14:14.079 05:03:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.079 05:03:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.079 malloc3 00:14:14.079 05:03:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.079 05:03:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:14.079 05:03:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.079 05:03:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.079 [2024-12-14 05:03:24.830122] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:14.079 [2024-12-14 05:03:24.830263] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:14.079 [2024-12-14 05:03:24.830301] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:14.079 [2024-12-14 05:03:24.830334] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:14.079 [2024-12-14 05:03:24.832428] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:14.079 [2024-12-14 05:03:24.832500] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:14.079 pt3 00:14:14.079 05:03:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.079 05:03:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:14.079 05:03:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:14.079 05:03:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:14:14.079 05:03:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:14:14.079 05:03:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:14:14.079 05:03:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:14.079 05:03:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:14.079 05:03:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:14.079 05:03:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:14:14.079 05:03:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.079 05:03:24 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.079 malloc4 00:14:14.079 05:03:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.079 05:03:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:14:14.079 05:03:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.079 05:03:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.079 [2024-12-14 05:03:24.862774] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:14:14.079 [2024-12-14 05:03:24.862882] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:14.079 [2024-12-14 05:03:24.862900] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:14:14.079 [2024-12-14 05:03:24.862912] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:14.079 [2024-12-14 05:03:24.864948] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:14.079 [2024-12-14 05:03:24.864988] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:14:14.079 pt4 00:14:14.079 05:03:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.079 05:03:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:14.079 05:03:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:14.079 05:03:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:14:14.079 05:03:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.079 05:03:24 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:14.079 [2024-12-14 05:03:24.874827] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:14.079 [2024-12-14 05:03:24.876653] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:14.079 [2024-12-14 05:03:24.876710] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:14.079 [2024-12-14 05:03:24.876769] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:14:14.079 [2024-12-14 05:03:24.876931] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:14:14.079 [2024-12-14 05:03:24.876955] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:14:14.079 [2024-12-14 05:03:24.877183] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:14:14.079 [2024-12-14 05:03:24.877674] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:14:14.079 [2024-12-14 05:03:24.877694] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:14:14.079 [2024-12-14 05:03:24.877804] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:14.079 05:03:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.079 05:03:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:14:14.079 05:03:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:14.079 05:03:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:14.079 05:03:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:14.079 05:03:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:14.079 
05:03:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:14.079 05:03:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:14.079 05:03:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:14.079 05:03:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:14.079 05:03:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:14.079 05:03:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:14.079 05:03:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:14.079 05:03:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.079 05:03:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.079 05:03:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.079 05:03:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:14.079 "name": "raid_bdev1", 00:14:14.079 "uuid": "ba993208-3b76-4833-986e-8fd9625cbdd0", 00:14:14.079 "strip_size_kb": 64, 00:14:14.079 "state": "online", 00:14:14.079 "raid_level": "raid5f", 00:14:14.079 "superblock": true, 00:14:14.079 "num_base_bdevs": 4, 00:14:14.079 "num_base_bdevs_discovered": 4, 00:14:14.079 "num_base_bdevs_operational": 4, 00:14:14.079 "base_bdevs_list": [ 00:14:14.079 { 00:14:14.079 "name": "pt1", 00:14:14.079 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:14.079 "is_configured": true, 00:14:14.079 "data_offset": 2048, 00:14:14.079 "data_size": 63488 00:14:14.079 }, 00:14:14.080 { 00:14:14.080 "name": "pt2", 00:14:14.080 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:14.080 "is_configured": true, 00:14:14.080 "data_offset": 2048, 00:14:14.080 
"data_size": 63488 00:14:14.080 }, 00:14:14.080 { 00:14:14.080 "name": "pt3", 00:14:14.080 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:14.080 "is_configured": true, 00:14:14.080 "data_offset": 2048, 00:14:14.080 "data_size": 63488 00:14:14.080 }, 00:14:14.080 { 00:14:14.080 "name": "pt4", 00:14:14.080 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:14.080 "is_configured": true, 00:14:14.080 "data_offset": 2048, 00:14:14.080 "data_size": 63488 00:14:14.080 } 00:14:14.080 ] 00:14:14.080 }' 00:14:14.080 05:03:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:14.080 05:03:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.678 05:03:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:14:14.678 05:03:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:14:14.678 05:03:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:14.678 05:03:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:14.678 05:03:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:14.678 05:03:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:14.678 05:03:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:14.678 05:03:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:14.678 05:03:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.678 05:03:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.678 [2024-12-14 05:03:25.347064] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:14.678 05:03:25 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.678 05:03:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:14.678 "name": "raid_bdev1", 00:14:14.678 "aliases": [ 00:14:14.678 "ba993208-3b76-4833-986e-8fd9625cbdd0" 00:14:14.678 ], 00:14:14.678 "product_name": "Raid Volume", 00:14:14.678 "block_size": 512, 00:14:14.678 "num_blocks": 190464, 00:14:14.678 "uuid": "ba993208-3b76-4833-986e-8fd9625cbdd0", 00:14:14.678 "assigned_rate_limits": { 00:14:14.678 "rw_ios_per_sec": 0, 00:14:14.678 "rw_mbytes_per_sec": 0, 00:14:14.678 "r_mbytes_per_sec": 0, 00:14:14.678 "w_mbytes_per_sec": 0 00:14:14.678 }, 00:14:14.678 "claimed": false, 00:14:14.678 "zoned": false, 00:14:14.678 "supported_io_types": { 00:14:14.678 "read": true, 00:14:14.678 "write": true, 00:14:14.678 "unmap": false, 00:14:14.678 "flush": false, 00:14:14.678 "reset": true, 00:14:14.678 "nvme_admin": false, 00:14:14.678 "nvme_io": false, 00:14:14.678 "nvme_io_md": false, 00:14:14.678 "write_zeroes": true, 00:14:14.678 "zcopy": false, 00:14:14.678 "get_zone_info": false, 00:14:14.678 "zone_management": false, 00:14:14.678 "zone_append": false, 00:14:14.678 "compare": false, 00:14:14.678 "compare_and_write": false, 00:14:14.678 "abort": false, 00:14:14.678 "seek_hole": false, 00:14:14.678 "seek_data": false, 00:14:14.678 "copy": false, 00:14:14.678 "nvme_iov_md": false 00:14:14.678 }, 00:14:14.678 "driver_specific": { 00:14:14.678 "raid": { 00:14:14.678 "uuid": "ba993208-3b76-4833-986e-8fd9625cbdd0", 00:14:14.678 "strip_size_kb": 64, 00:14:14.678 "state": "online", 00:14:14.678 "raid_level": "raid5f", 00:14:14.678 "superblock": true, 00:14:14.678 "num_base_bdevs": 4, 00:14:14.678 "num_base_bdevs_discovered": 4, 00:14:14.678 "num_base_bdevs_operational": 4, 00:14:14.678 "base_bdevs_list": [ 00:14:14.678 { 00:14:14.678 "name": "pt1", 00:14:14.678 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:14.678 "is_configured": true, 00:14:14.678 "data_offset": 2048, 
00:14:14.678 "data_size": 63488 00:14:14.678 }, 00:14:14.678 { 00:14:14.678 "name": "pt2", 00:14:14.678 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:14.678 "is_configured": true, 00:14:14.678 "data_offset": 2048, 00:14:14.678 "data_size": 63488 00:14:14.678 }, 00:14:14.678 { 00:14:14.678 "name": "pt3", 00:14:14.678 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:14.678 "is_configured": true, 00:14:14.678 "data_offset": 2048, 00:14:14.678 "data_size": 63488 00:14:14.678 }, 00:14:14.678 { 00:14:14.678 "name": "pt4", 00:14:14.678 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:14.678 "is_configured": true, 00:14:14.678 "data_offset": 2048, 00:14:14.678 "data_size": 63488 00:14:14.678 } 00:14:14.678 ] 00:14:14.678 } 00:14:14.678 } 00:14:14.678 }' 00:14:14.678 05:03:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:14.678 05:03:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:14:14.678 pt2 00:14:14.678 pt3 00:14:14.678 pt4' 00:14:14.678 05:03:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:14.678 05:03:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:14.678 05:03:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:14.678 05:03:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:14.678 05:03:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:14:14.678 05:03:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.678 05:03:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.678 05:03:25 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.678 05:03:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:14.678 05:03:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:14.678 05:03:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:14.679 05:03:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:14.679 05:03:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:14:14.679 05:03:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.679 05:03:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.679 05:03:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.679 05:03:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:14.679 05:03:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:14.679 05:03:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:14.679 05:03:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:14:14.679 05:03:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.679 05:03:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.679 05:03:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:14.679 05:03:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.938 05:03:25 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:14.938 05:03:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:14.938 05:03:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:14.938 05:03:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:14.938 05:03:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:14:14.938 05:03:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.938 05:03:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.938 05:03:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.938 05:03:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:14.938 05:03:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:14.938 05:03:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:14:14.938 05:03:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:14.938 05:03:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.938 05:03:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.938 [2024-12-14 05:03:25.626565] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:14.938 05:03:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.938 05:03:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=ba993208-3b76-4833-986e-8fd9625cbdd0 00:14:14.938 05:03:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 
ba993208-3b76-4833-986e-8fd9625cbdd0 ']' 00:14:14.938 05:03:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:14.938 05:03:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.939 05:03:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.939 [2024-12-14 05:03:25.654349] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:14.939 [2024-12-14 05:03:25.654375] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:14.939 [2024-12-14 05:03:25.654436] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:14.939 [2024-12-14 05:03:25.654510] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:14.939 [2024-12-14 05:03:25.654519] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:14:14.939 05:03:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.939 05:03:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:14:14.939 05:03:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:14.939 05:03:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.939 05:03:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.939 05:03:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.939 05:03:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:14:14.939 05:03:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:14:14.939 05:03:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:14.939 
05:03:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:14:14.939 05:03:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.939 05:03:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.939 05:03:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.939 05:03:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:14.939 05:03:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:14:14.939 05:03:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.939 05:03:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.939 05:03:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.939 05:03:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:14.939 05:03:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:14:14.939 05:03:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.939 05:03:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.939 05:03:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.939 05:03:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:14.939 05:03:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:14:14.939 05:03:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.939 05:03:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.939 05:03:25 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.939 05:03:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:14:14.939 05:03:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:14:14.939 05:03:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.939 05:03:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.939 05:03:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.939 05:03:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:14:14.939 05:03:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:14:14.939 05:03:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:14:14.939 05:03:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:14:14.939 05:03:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:14:14.939 05:03:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:14.939 05:03:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:14:14.939 05:03:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:14.939 05:03:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:14:14.939 05:03:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 
-- # xtrace_disable 00:14:14.939 05:03:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.939 [2024-12-14 05:03:25.806136] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:14:14.939 [2024-12-14 05:03:25.807993] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:14:14.939 [2024-12-14 05:03:25.808038] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:14:14.939 [2024-12-14 05:03:25.808066] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:14:14.939 [2024-12-14 05:03:25.808106] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:14:14.939 [2024-12-14 05:03:25.808144] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:14:14.939 [2024-12-14 05:03:25.808172] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:14:14.939 [2024-12-14 05:03:25.808188] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:14:14.939 [2024-12-14 05:03:25.808201] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:14.939 [2024-12-14 05:03:25.808211] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state configuring 00:14:14.939 request: 00:14:14.939 { 00:14:14.939 "name": "raid_bdev1", 00:14:14.939 "raid_level": "raid5f", 00:14:14.939 "base_bdevs": [ 00:14:14.939 "malloc1", 00:14:14.939 "malloc2", 00:14:14.939 "malloc3", 00:14:14.939 "malloc4" 00:14:14.939 ], 00:14:14.939 "strip_size_kb": 64, 00:14:14.939 "superblock": false, 00:14:14.939 "method": "bdev_raid_create", 00:14:14.939 "req_id": 1 00:14:14.939 } 00:14:14.939 Got JSON-RPC error response 
00:14:14.939 response: 00:14:14.939 { 00:14:14.939 "code": -17, 00:14:14.939 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:14:14.939 } 00:14:14.939 05:03:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:14:14.939 05:03:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:14:14.939 05:03:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:14.939 05:03:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:14.939 05:03:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:15.199 05:03:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:15.199 05:03:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.199 05:03:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.199 05:03:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:14:15.199 05:03:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.199 05:03:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:14:15.199 05:03:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:14:15.199 05:03:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:15.199 05:03:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.199 05:03:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.199 [2024-12-14 05:03:25.877976] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:15.199 [2024-12-14 05:03:25.878073] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:14:15.199 [2024-12-14 05:03:25.878110] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:14:15.199 [2024-12-14 05:03:25.878136] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:15.199 [2024-12-14 05:03:25.880342] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:15.199 [2024-12-14 05:03:25.880417] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:15.199 [2024-12-14 05:03:25.880509] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:14:15.199 [2024-12-14 05:03:25.880588] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:15.199 pt1 00:14:15.199 05:03:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.199 05:03:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:14:15.199 05:03:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:15.199 05:03:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:15.199 05:03:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:15.199 05:03:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:15.199 05:03:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:15.199 05:03:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:15.199 05:03:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:15.199 05:03:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:15.199 05:03:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 
-- # local tmp 00:14:15.199 05:03:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:15.199 05:03:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:15.199 05:03:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.199 05:03:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.199 05:03:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.199 05:03:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:15.199 "name": "raid_bdev1", 00:14:15.199 "uuid": "ba993208-3b76-4833-986e-8fd9625cbdd0", 00:14:15.199 "strip_size_kb": 64, 00:14:15.199 "state": "configuring", 00:14:15.199 "raid_level": "raid5f", 00:14:15.199 "superblock": true, 00:14:15.199 "num_base_bdevs": 4, 00:14:15.199 "num_base_bdevs_discovered": 1, 00:14:15.199 "num_base_bdevs_operational": 4, 00:14:15.199 "base_bdevs_list": [ 00:14:15.199 { 00:14:15.199 "name": "pt1", 00:14:15.199 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:15.199 "is_configured": true, 00:14:15.199 "data_offset": 2048, 00:14:15.199 "data_size": 63488 00:14:15.199 }, 00:14:15.199 { 00:14:15.199 "name": null, 00:14:15.199 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:15.199 "is_configured": false, 00:14:15.199 "data_offset": 2048, 00:14:15.199 "data_size": 63488 00:14:15.199 }, 00:14:15.199 { 00:14:15.199 "name": null, 00:14:15.199 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:15.199 "is_configured": false, 00:14:15.199 "data_offset": 2048, 00:14:15.199 "data_size": 63488 00:14:15.199 }, 00:14:15.199 { 00:14:15.199 "name": null, 00:14:15.199 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:15.199 "is_configured": false, 00:14:15.199 "data_offset": 2048, 00:14:15.199 "data_size": 63488 00:14:15.199 } 00:14:15.199 ] 00:14:15.199 }' 
00:14:15.199 05:03:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:15.199 05:03:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.768 05:03:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:14:15.768 05:03:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:15.768 05:03:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.768 05:03:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.768 [2024-12-14 05:03:26.369114] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:15.768 [2024-12-14 05:03:26.369179] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:15.768 [2024-12-14 05:03:26.369201] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:14:15.768 [2024-12-14 05:03:26.369209] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:15.768 [2024-12-14 05:03:26.369534] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:15.768 [2024-12-14 05:03:26.369549] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:15.768 [2024-12-14 05:03:26.369604] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:15.768 [2024-12-14 05:03:26.369621] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:15.768 pt2 00:14:15.768 05:03:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.768 05:03:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:14:15.768 05:03:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:14:15.768 05:03:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.768 [2024-12-14 05:03:26.381112] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:14:15.768 05:03:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.768 05:03:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:14:15.768 05:03:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:15.768 05:03:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:15.768 05:03:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:15.768 05:03:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:15.768 05:03:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:15.768 05:03:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:15.768 05:03:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:15.768 05:03:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:15.768 05:03:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:15.768 05:03:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:15.768 05:03:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.768 05:03:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.768 05:03:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:15.768 05:03:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:14:15.768 05:03:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:15.768 "name": "raid_bdev1", 00:14:15.768 "uuid": "ba993208-3b76-4833-986e-8fd9625cbdd0", 00:14:15.768 "strip_size_kb": 64, 00:14:15.768 "state": "configuring", 00:14:15.768 "raid_level": "raid5f", 00:14:15.768 "superblock": true, 00:14:15.768 "num_base_bdevs": 4, 00:14:15.768 "num_base_bdevs_discovered": 1, 00:14:15.768 "num_base_bdevs_operational": 4, 00:14:15.768 "base_bdevs_list": [ 00:14:15.768 { 00:14:15.768 "name": "pt1", 00:14:15.768 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:15.768 "is_configured": true, 00:14:15.768 "data_offset": 2048, 00:14:15.768 "data_size": 63488 00:14:15.768 }, 00:14:15.768 { 00:14:15.768 "name": null, 00:14:15.768 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:15.768 "is_configured": false, 00:14:15.768 "data_offset": 0, 00:14:15.768 "data_size": 63488 00:14:15.768 }, 00:14:15.768 { 00:14:15.768 "name": null, 00:14:15.768 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:15.768 "is_configured": false, 00:14:15.768 "data_offset": 2048, 00:14:15.768 "data_size": 63488 00:14:15.768 }, 00:14:15.768 { 00:14:15.768 "name": null, 00:14:15.768 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:15.768 "is_configured": false, 00:14:15.769 "data_offset": 2048, 00:14:15.769 "data_size": 63488 00:14:15.769 } 00:14:15.769 ] 00:14:15.769 }' 00:14:15.769 05:03:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:15.769 05:03:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.028 05:03:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:14:16.028 05:03:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:16.028 05:03:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 
00:14:16.028 05:03:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:16.028 05:03:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:16.028 [2024-12-14 05:03:26.880237] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:14:16.028 [2024-12-14 05:03:26.880358] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:14:16.028 [2024-12-14 05:03:26.880389] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80
00:14:16.028 [2024-12-14 05:03:26.880417] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:14:16.028 [2024-12-14 05:03:26.880759] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:14:16.028 [2024-12-14 05:03:26.880822] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:14:16.028 [2024-12-14 05:03:26.880901] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:14:16.028 [2024-12-14 05:03:26.880951] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:14:16.028 pt2
00:14:16.028 05:03:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:16.028 05:03:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:14:16.028 05:03:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:14:16.028 05:03:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:14:16.028 05:03:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:16.028 05:03:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:16.028 [2024-12-14 05:03:26.892205] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:14:16.028 [2024-12-14 05:03:26.892314] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:14:16.028 [2024-12-14 05:03:26.892345] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80
00:14:16.028 [2024-12-14 05:03:26.892373] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:14:16.028 [2024-12-14 05:03:26.892688] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:14:16.028 [2024-12-14 05:03:26.892747] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:14:16.028 [2024-12-14 05:03:26.892827] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3
00:14:16.028 [2024-12-14 05:03:26.892875] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:14:16.028 pt3
00:14:16.028 05:03:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:16.028 05:03:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:14:16.028 05:03:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:14:16.028 05:03:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004
00:14:16.028 05:03:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:16.028 05:03:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:16.028 [2024-12-14 05:03:26.904180] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4
00:14:16.028 [2024-12-14 05:03:26.904287] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:14:16.028 [2024-12-14 05:03:26.904320] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180
00:14:16.028 [2024-12-14 05:03:26.904348] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:14:16.028 [2024-12-14 05:03:26.904677] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:14:16.028 [2024-12-14 05:03:26.904737] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4
00:14:16.028 [2024-12-14 05:03:26.904820] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4
00:14:16.028 [2024-12-14 05:03:26.904869] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed
00:14:16.028 [2024-12-14 05:03:26.904985] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980
00:14:16.028 [2024-12-14 05:03:26.905026] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512
00:14:16.028 [2024-12-14 05:03:26.905290] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10
00:14:16.028 [2024-12-14 05:03:26.905817] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980
00:14:16.028 [2024-12-14 05:03:26.905866] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980
00:14:16.028 [2024-12-14 05:03:26.905999] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:14:16.028 pt4
00:14:16.288 05:03:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:16.288 05:03:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:14:16.288 05:03:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:14:16.288 05:03:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4
00:14:16.288 05:03:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:14:16.288 05:03:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:14:16.288 05:03:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:14:16.288 05:03:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:14:16.288 05:03:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:14:16.288 05:03:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:14:16.288 05:03:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:14:16.288 05:03:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:14:16.288 05:03:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:14:16.288 05:03:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:16.288 05:03:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:16.288 05:03:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:16.288 05:03:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:16.288 05:03:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:16.288 05:03:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:14:16.288 "name": "raid_bdev1",
00:14:16.288 "uuid": "ba993208-3b76-4833-986e-8fd9625cbdd0",
00:14:16.288 "strip_size_kb": 64,
00:14:16.288 "state": "online",
00:14:16.288 "raid_level": "raid5f",
00:14:16.288 "superblock": true,
00:14:16.288 "num_base_bdevs": 4,
00:14:16.288 "num_base_bdevs_discovered": 4,
00:14:16.288 "num_base_bdevs_operational": 4,
00:14:16.288 "base_bdevs_list": [
00:14:16.288 {
00:14:16.288 "name": "pt1",
00:14:16.288 "uuid": "00000000-0000-0000-0000-000000000001",
00:14:16.288 "is_configured": true,
00:14:16.288 "data_offset": 2048,
00:14:16.288 "data_size": 63488
00:14:16.288 },
00:14:16.288 {
00:14:16.288 "name": "pt2",
00:14:16.288 "uuid": "00000000-0000-0000-0000-000000000002",
00:14:16.288 "is_configured": true,
00:14:16.288 "data_offset": 2048,
00:14:16.288 "data_size": 63488
00:14:16.288 },
00:14:16.288 {
00:14:16.288 "name": "pt3",
00:14:16.288 "uuid": "00000000-0000-0000-0000-000000000003",
00:14:16.288 "is_configured": true,
00:14:16.288 "data_offset": 2048,
00:14:16.288 "data_size": 63488
00:14:16.288 },
00:14:16.288 {
00:14:16.288 "name": "pt4",
00:14:16.288 "uuid": "00000000-0000-0000-0000-000000000004",
00:14:16.288 "is_configured": true,
00:14:16.288 "data_offset": 2048,
00:14:16.288 "data_size": 63488
00:14:16.288 }
00:14:16.288 ]
00:14:16.288 }'
00:14:16.288 05:03:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:14:16.288 05:03:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:16.547 05:03:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1
00:14:16.547 05:03:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:14:16.547 05:03:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:14:16.547 05:03:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:14:16.547 05:03:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name
00:14:16.547 05:03:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:14:16.547 05:03:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:14:16.547 05:03:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:14:16.547 05:03:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:16.547 05:03:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:16.547 [2024-12-14 05:03:27.371581] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:14:16.547 05:03:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:16.547 05:03:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:14:16.547 "name": "raid_bdev1",
00:14:16.547 "aliases": [
00:14:16.547 "ba993208-3b76-4833-986e-8fd9625cbdd0"
00:14:16.547 ],
00:14:16.547 "product_name": "Raid Volume",
00:14:16.547 "block_size": 512,
00:14:16.547 "num_blocks": 190464,
00:14:16.547 "uuid": "ba993208-3b76-4833-986e-8fd9625cbdd0",
00:14:16.547 "assigned_rate_limits": {
00:14:16.547 "rw_ios_per_sec": 0,
00:14:16.547 "rw_mbytes_per_sec": 0,
00:14:16.547 "r_mbytes_per_sec": 0,
00:14:16.547 "w_mbytes_per_sec": 0
00:14:16.547 },
00:14:16.547 "claimed": false,
00:14:16.547 "zoned": false,
00:14:16.547 "supported_io_types": {
00:14:16.547 "read": true,
00:14:16.547 "write": true,
00:14:16.547 "unmap": false,
00:14:16.547 "flush": false,
00:14:16.547 "reset": true,
00:14:16.547 "nvme_admin": false,
00:14:16.547 "nvme_io": false,
00:14:16.547 "nvme_io_md": false,
00:14:16.547 "write_zeroes": true,
00:14:16.547 "zcopy": false,
00:14:16.547 "get_zone_info": false,
00:14:16.547 "zone_management": false,
00:14:16.547 "zone_append": false,
00:14:16.547 "compare": false,
00:14:16.547 "compare_and_write": false,
00:14:16.547 "abort": false,
00:14:16.547 "seek_hole": false,
00:14:16.547 "seek_data": false,
00:14:16.547 "copy": false,
00:14:16.547 "nvme_iov_md": false
00:14:16.547 },
00:14:16.547 "driver_specific": {
00:14:16.547 "raid": {
00:14:16.547 "uuid": "ba993208-3b76-4833-986e-8fd9625cbdd0",
00:14:16.547 "strip_size_kb": 64,
00:14:16.547 "state": "online",
00:14:16.547 "raid_level": "raid5f",
00:14:16.547 "superblock": true,
00:14:16.547 "num_base_bdevs": 4,
00:14:16.547 "num_base_bdevs_discovered": 4,
00:14:16.547 "num_base_bdevs_operational": 4,
00:14:16.547 "base_bdevs_list": [
00:14:16.547 {
00:14:16.547 "name": "pt1",
00:14:16.547 "uuid": "00000000-0000-0000-0000-000000000001",
00:14:16.547 "is_configured": true,
00:14:16.547 "data_offset": 2048,
00:14:16.547 "data_size": 63488
00:14:16.547 },
00:14:16.547 {
00:14:16.547 "name": "pt2",
00:14:16.547 "uuid": "00000000-0000-0000-0000-000000000002",
00:14:16.547 "is_configured": true,
00:14:16.547 "data_offset": 2048,
00:14:16.547 "data_size": 63488
00:14:16.547 },
00:14:16.547 {
00:14:16.547 "name": "pt3",
00:14:16.547 "uuid": "00000000-0000-0000-0000-000000000003",
00:14:16.547 "is_configured": true,
00:14:16.547 "data_offset": 2048,
00:14:16.547 "data_size": 63488
00:14:16.547 },
00:14:16.547 {
00:14:16.547 "name": "pt4",
00:14:16.547 "uuid": "00000000-0000-0000-0000-000000000004",
00:14:16.547 "is_configured": true,
00:14:16.547 "data_offset": 2048,
00:14:16.547 "data_size": 63488
00:14:16.547 }
00:14:16.547 ]
00:14:16.547 }
00:14:16.547 }
00:14:16.547 }'
00:14:16.548 05:03:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:14:16.806 05:03:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:14:16.806 pt2
00:14:16.806 pt3
00:14:16.806 pt4'
00:14:16.806 05:03:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:14:16.806 05:03:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:14:16.806 05:03:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:14:16.806 05:03:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:14:16.806 05:03:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:14:16.806 05:03:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:16.806 05:03:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:16.806 05:03:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:16.806 05:03:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:14:16.806 05:03:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:14:16.806 05:03:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:14:16.806 05:03:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:14:16.806 05:03:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:14:16.806 05:03:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:16.806 05:03:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:16.806 05:03:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:16.806 05:03:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:14:16.806 05:03:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:14:16.806 05:03:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:14:16.806 05:03:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3
00:14:16.806 05:03:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:14:16.806 05:03:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:16.806 05:03:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:16.806 05:03:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:16.806 05:03:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:14:16.806 05:03:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:14:16.806 05:03:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:14:16.806 05:03:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4
00:14:16.806 05:03:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:16.806 05:03:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:16.806 05:03:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:14:16.806 05:03:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:17.065 05:03:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:14:17.065 05:03:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:14:17.065 05:03:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid'
00:14:17.065 05:03:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:14:17.065 05:03:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:17.065 05:03:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:17.065 [2024-12-14 05:03:27.703048] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:14:17.065 05:03:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:17.065 05:03:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' ba993208-3b76-4833-986e-8fd9625cbdd0 '!=' ba993208-3b76-4833-986e-8fd9625cbdd0 ']'
00:14:17.065 05:03:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f
00:14:17.065 05:03:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:14:17.065 05:03:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0
00:14:17.065 05:03:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1
00:14:17.065 05:03:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:17.065 05:03:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:17.065 [2024-12-14 05:03:27.750837] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1
00:14:17.065 05:03:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:17.065 05:03:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3
00:14:17.065 05:03:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:14:17.065 05:03:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:14:17.065 05:03:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:14:17.065 05:03:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:14:17.065 05:03:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:14:17.065 05:03:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:14:17.065 05:03:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:14:17.065 05:03:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:14:17.065 05:03:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:14:17.065 05:03:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:17.065 05:03:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:17.065 05:03:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:17.065 05:03:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:17.065 05:03:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:17.065 05:03:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:14:17.065 "name": "raid_bdev1",
00:14:17.065 "uuid": "ba993208-3b76-4833-986e-8fd9625cbdd0",
00:14:17.065 "strip_size_kb": 64,
00:14:17.065 "state": "online",
00:14:17.065 "raid_level": "raid5f",
00:14:17.065 "superblock": true,
00:14:17.065 "num_base_bdevs": 4,
00:14:17.065 "num_base_bdevs_discovered": 3,
00:14:17.065 "num_base_bdevs_operational": 3,
00:14:17.065 "base_bdevs_list": [
00:14:17.065 {
00:14:17.065 "name": null,
00:14:17.065 "uuid": "00000000-0000-0000-0000-000000000000",
00:14:17.065 "is_configured": false,
00:14:17.065 "data_offset": 0,
00:14:17.065 "data_size": 63488
00:14:17.065 },
00:14:17.065 {
00:14:17.065 "name": "pt2",
00:14:17.065 "uuid": "00000000-0000-0000-0000-000000000002",
00:14:17.065 "is_configured": true,
00:14:17.065 "data_offset": 2048,
00:14:17.065 "data_size": 63488
00:14:17.065 },
00:14:17.065 {
00:14:17.065 "name": "pt3",
00:14:17.065 "uuid": "00000000-0000-0000-0000-000000000003",
00:14:17.065 "is_configured": true,
00:14:17.065 "data_offset": 2048,
00:14:17.065 "data_size": 63488
00:14:17.065 },
00:14:17.065 {
00:14:17.065 "name": "pt4",
00:14:17.065 "uuid": "00000000-0000-0000-0000-000000000004",
00:14:17.065 "is_configured": true,
00:14:17.065 "data_offset": 2048,
00:14:17.065 "data_size": 63488
00:14:17.065 }
00:14:17.065 ]
00:14:17.065 }'
00:14:17.065 05:03:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:14:17.065 05:03:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:17.633 05:03:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:14:17.633 05:03:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:17.633 05:03:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:17.633 [2024-12-14 05:03:28.221973] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:14:17.633 [2024-12-14 05:03:28.222044] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:14:17.633 [2024-12-14 05:03:28.222135] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:14:17.633 [2024-12-14 05:03:28.222219] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:14:17.633 [2024-12-14 05:03:28.222255] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline
00:14:17.633 05:03:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:17.633 05:03:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:17.633 05:03:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]'
00:14:17.633 05:03:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:17.633 05:03:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:17.633 05:03:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:17.633 05:03:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev=
00:14:17.633 05:03:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']'
00:14:17.633 05:03:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 ))
00:14:17.633 05:03:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs ))
00:14:17.633 05:03:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2
00:14:17.633 05:03:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:17.633 05:03:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:17.633 05:03:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:17.633 05:03:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ ))
00:14:17.633 05:03:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs ))
00:14:17.633 05:03:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3
00:14:17.633 05:03:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:17.633 05:03:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:17.633 05:03:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:17.633 05:03:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ ))
00:14:17.633 05:03:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs ))
00:14:17.633 05:03:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4
00:14:17.633 05:03:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:17.633 05:03:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:17.633 05:03:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:17.633 05:03:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ ))
00:14:17.633 05:03:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs ))
00:14:17.633 05:03:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 ))
00:14:17.633 05:03:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 ))
00:14:17.633 05:03:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:14:17.633 05:03:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:17.633 05:03:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:17.633 [2024-12-14 05:03:28.317808] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:14:17.633 [2024-12-14 05:03:28.317860] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:14:17.633 [2024-12-14 05:03:28.317875] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480
00:14:17.633 [2024-12-14 05:03:28.317884] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:14:17.633 [2024-12-14 05:03:28.319982] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:14:17.633 [2024-12-14 05:03:28.320023] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:14:17.633 [2024-12-14 05:03:28.320079] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:14:17.633 [2024-12-14 05:03:28.320110] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:14:17.633 pt2
00:14:17.633 05:03:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:17.633 05:03:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3
00:14:17.633 05:03:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:14:17.633 05:03:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:14:17.633 05:03:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:14:17.633 05:03:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:14:17.633 05:03:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:14:17.633 05:03:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:14:17.633 05:03:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:14:17.633 05:03:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:14:17.634 05:03:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:14:17.634 05:03:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:17.634 05:03:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:17.634 05:03:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:17.634 05:03:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:17.634 05:03:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:17.634 05:03:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:14:17.634 "name": "raid_bdev1",
00:14:17.634 "uuid": "ba993208-3b76-4833-986e-8fd9625cbdd0",
00:14:17.634 "strip_size_kb": 64,
00:14:17.634 "state": "configuring",
00:14:17.634 "raid_level": "raid5f",
00:14:17.634 "superblock": true,
00:14:17.634 "num_base_bdevs": 4,
00:14:17.634 "num_base_bdevs_discovered": 1,
00:14:17.634 "num_base_bdevs_operational": 3,
00:14:17.634 "base_bdevs_list": [
00:14:17.634 {
00:14:17.634 "name": null,
00:14:17.634 "uuid": "00000000-0000-0000-0000-000000000000",
00:14:17.634 "is_configured": false,
00:14:17.634 "data_offset": 2048,
00:14:17.634 "data_size": 63488
00:14:17.634 },
00:14:17.634 {
00:14:17.634 "name": "pt2",
00:14:17.634 "uuid": "00000000-0000-0000-0000-000000000002",
00:14:17.634 "is_configured": true,
00:14:17.634 "data_offset": 2048,
00:14:17.634 "data_size": 63488
00:14:17.634 },
00:14:17.634 {
00:14:17.634 "name": null,
00:14:17.634 "uuid": "00000000-0000-0000-0000-000000000003",
00:14:17.634 "is_configured": false,
00:14:17.634 "data_offset": 2048,
00:14:17.634 "data_size": 63488
00:14:17.634 },
00:14:17.634 {
00:14:17.634 "name": null,
00:14:17.634 "uuid": "00000000-0000-0000-0000-000000000004",
00:14:17.634 "is_configured": false,
00:14:17.634 "data_offset": 2048,
00:14:17.634 "data_size": 63488
00:14:17.634 }
00:14:17.634 ]
00:14:17.634 }'
00:14:17.634 05:03:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:14:17.634 05:03:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:18.201 05:03:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ ))
00:14:18.201 05:03:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 ))
00:14:18.201 05:03:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:14:18.201 05:03:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:18.201 05:03:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:18.201 [2024-12-14 05:03:28.785026] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:14:18.201 [2024-12-14 05:03:28.785136] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:14:18.201 [2024-12-14 05:03:28.785153] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80
00:14:18.201 [2024-12-14 05:03:28.785164] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:14:18.201 [2024-12-14 05:03:28.785489] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:14:18.201 [2024-12-14 05:03:28.785510] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:14:18.201 [2024-12-14 05:03:28.785559] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3
00:14:18.201 [2024-12-14 05:03:28.785586] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:14:18.201 pt3
00:14:18.201 05:03:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:18.201 05:03:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3
00:14:18.201 05:03:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:14:18.201 05:03:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:14:18.201 05:03:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:14:18.201 05:03:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:14:18.201 05:03:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:14:18.201 05:03:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:14:18.201 05:03:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:14:18.201 05:03:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:14:18.201 05:03:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:14:18.201 05:03:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:18.201 05:03:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:18.201 05:03:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:18.201 05:03:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:18.201 05:03:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:18.201 05:03:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:14:18.201 "name": "raid_bdev1",
00:14:18.201 "uuid": "ba993208-3b76-4833-986e-8fd9625cbdd0",
00:14:18.201 "strip_size_kb": 64,
00:14:18.201 "state": "configuring",
00:14:18.201 "raid_level": "raid5f",
00:14:18.201 "superblock": true,
00:14:18.201 "num_base_bdevs": 4,
00:14:18.201 "num_base_bdevs_discovered": 2,
00:14:18.201 "num_base_bdevs_operational": 3,
00:14:18.201 "base_bdevs_list": [
00:14:18.201 {
00:14:18.201 "name": null,
00:14:18.201 "uuid": "00000000-0000-0000-0000-000000000000",
00:14:18.201 "is_configured": false,
00:14:18.201 "data_offset": 2048,
00:14:18.201 "data_size": 63488
00:14:18.201 },
00:14:18.201 {
00:14:18.201 "name": "pt2",
00:14:18.201 "uuid": "00000000-0000-0000-0000-000000000002",
00:14:18.201 "is_configured": true,
00:14:18.201 "data_offset": 2048,
00:14:18.201 "data_size": 63488
00:14:18.201 },
00:14:18.201 {
00:14:18.201 "name": "pt3",
00:14:18.201 "uuid": "00000000-0000-0000-0000-000000000003",
00:14:18.201 "is_configured": true,
00:14:18.201 "data_offset": 2048,
00:14:18.201 "data_size": 63488
00:14:18.201 },
00:14:18.201 {
00:14:18.201 "name": null,
00:14:18.201 "uuid": "00000000-0000-0000-0000-000000000004",
00:14:18.201 "is_configured": false,
00:14:18.201 "data_offset": 2048,
00:14:18.201 "data_size": 63488
00:14:18.201 }
00:14:18.201 ]
00:14:18.201 }'
00:14:18.201 05:03:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:14:18.201 05:03:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:18.460 05:03:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ ))
00:14:18.460 05:03:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 ))
00:14:18.460 05:03:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3
00:14:18.460 05:03:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004
00:14:18.460 05:03:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:18.460 05:03:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:18.460 [2024-12-14 05:03:29.204316] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4
00:14:18.460 [2024-12-14 05:03:29.204420] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:14:18.460 [2024-12-14 05:03:29.204469] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80
00:14:18.460 [2024-12-14 05:03:29.204509] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:14:18.460 [2024-12-14 05:03:29.204830] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:14:18.460 [2024-12-14 05:03:29.204889] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4
00:14:18.460 [2024-12-14 05:03:29.204969] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4
00:14:18.460 [2024-12-14 05:03:29.205018] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed
00:14:18.460 [2024-12-14 05:03:29.205126] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00
00:14:18.460 [2024-12-14 05:03:29.205182] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512
00:14:18.460 [2024-12-14 05:03:29.205434] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0
00:14:18.460 [2024-12-14 05:03:29.205952] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00
00:14:18.460 [2024-12-14 05:03:29.206002] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00
00:14:18.460 [2024-12-14 05:03:29.206237] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:14:18.460 pt4
00:14:18.460 05:03:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:18.460 05:03:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3
00:14:18.460 05:03:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:14:18.460 05:03:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:14:18.460 05:03:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:14:18.460 05:03:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:14:18.460 05:03:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:14:18.460 05:03:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:14:18.460 05:03:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:14:18.460 05:03:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:14:18.460 05:03:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:14:18.460
05:03:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:18.460 05:03:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:18.460 05:03:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.460 05:03:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.460 05:03:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.460 05:03:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:18.460 "name": "raid_bdev1", 00:14:18.460 "uuid": "ba993208-3b76-4833-986e-8fd9625cbdd0", 00:14:18.460 "strip_size_kb": 64, 00:14:18.460 "state": "online", 00:14:18.460 "raid_level": "raid5f", 00:14:18.460 "superblock": true, 00:14:18.460 "num_base_bdevs": 4, 00:14:18.460 "num_base_bdevs_discovered": 3, 00:14:18.460 "num_base_bdevs_operational": 3, 00:14:18.460 "base_bdevs_list": [ 00:14:18.460 { 00:14:18.460 "name": null, 00:14:18.460 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:18.460 "is_configured": false, 00:14:18.460 "data_offset": 2048, 00:14:18.460 "data_size": 63488 00:14:18.460 }, 00:14:18.460 { 00:14:18.460 "name": "pt2", 00:14:18.460 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:18.460 "is_configured": true, 00:14:18.460 "data_offset": 2048, 00:14:18.460 "data_size": 63488 00:14:18.460 }, 00:14:18.460 { 00:14:18.460 "name": "pt3", 00:14:18.460 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:18.460 "is_configured": true, 00:14:18.460 "data_offset": 2048, 00:14:18.460 "data_size": 63488 00:14:18.460 }, 00:14:18.460 { 00:14:18.460 "name": "pt4", 00:14:18.460 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:18.460 "is_configured": true, 00:14:18.460 "data_offset": 2048, 00:14:18.460 "data_size": 63488 00:14:18.460 } 00:14:18.460 ] 00:14:18.460 }' 00:14:18.460 05:03:29 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:18.460 05:03:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.027 05:03:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:19.027 05:03:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.027 05:03:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.027 [2024-12-14 05:03:29.639576] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:19.027 [2024-12-14 05:03:29.639654] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:19.027 [2024-12-14 05:03:29.639706] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:19.028 [2024-12-14 05:03:29.639768] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:19.028 [2024-12-14 05:03:29.639793] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:14:19.028 05:03:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.028 05:03:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:19.028 05:03:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:14:19.028 05:03:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.028 05:03:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.028 05:03:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.028 05:03:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:14:19.028 05:03:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n 
'' ']' 00:14:19.028 05:03:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:14:19.028 05:03:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:14:19.028 05:03:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:14:19.028 05:03:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.028 05:03:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.028 05:03:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.028 05:03:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:19.028 05:03:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.028 05:03:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.028 [2024-12-14 05:03:29.715460] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:19.028 [2024-12-14 05:03:29.715512] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:19.028 [2024-12-14 05:03:29.715529] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:14:19.028 [2024-12-14 05:03:29.715538] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:19.028 [2024-12-14 05:03:29.717595] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:19.028 [2024-12-14 05:03:29.717681] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:19.028 [2024-12-14 05:03:29.717742] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:14:19.028 [2024-12-14 05:03:29.717785] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:19.028 
[2024-12-14 05:03:29.717882] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:14:19.028 [2024-12-14 05:03:29.717895] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:19.028 [2024-12-14 05:03:29.717911] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state configuring 00:14:19.028 [2024-12-14 05:03:29.717947] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:19.028 [2024-12-14 05:03:29.718054] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:19.028 pt1 00:14:19.028 05:03:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.028 05:03:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:14:19.028 05:03:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:14:19.028 05:03:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:19.028 05:03:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:19.028 05:03:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:19.028 05:03:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:19.028 05:03:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:19.028 05:03:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:19.028 05:03:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:19.028 05:03:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:19.028 05:03:29 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:14:19.028 05:03:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:19.028 05:03:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:19.028 05:03:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.028 05:03:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.028 05:03:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.028 05:03:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:19.028 "name": "raid_bdev1", 00:14:19.028 "uuid": "ba993208-3b76-4833-986e-8fd9625cbdd0", 00:14:19.028 "strip_size_kb": 64, 00:14:19.028 "state": "configuring", 00:14:19.028 "raid_level": "raid5f", 00:14:19.028 "superblock": true, 00:14:19.028 "num_base_bdevs": 4, 00:14:19.028 "num_base_bdevs_discovered": 2, 00:14:19.028 "num_base_bdevs_operational": 3, 00:14:19.028 "base_bdevs_list": [ 00:14:19.028 { 00:14:19.028 "name": null, 00:14:19.028 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:19.028 "is_configured": false, 00:14:19.028 "data_offset": 2048, 00:14:19.028 "data_size": 63488 00:14:19.028 }, 00:14:19.028 { 00:14:19.028 "name": "pt2", 00:14:19.028 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:19.028 "is_configured": true, 00:14:19.028 "data_offset": 2048, 00:14:19.028 "data_size": 63488 00:14:19.028 }, 00:14:19.028 { 00:14:19.028 "name": "pt3", 00:14:19.028 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:19.028 "is_configured": true, 00:14:19.028 "data_offset": 2048, 00:14:19.028 "data_size": 63488 00:14:19.028 }, 00:14:19.028 { 00:14:19.028 "name": null, 00:14:19.028 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:19.028 "is_configured": false, 00:14:19.028 "data_offset": 2048, 00:14:19.028 "data_size": 63488 00:14:19.028 } 00:14:19.028 ] 
00:14:19.028 }' 00:14:19.028 05:03:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:19.028 05:03:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.597 05:03:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:14:19.597 05:03:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.597 05:03:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.597 05:03:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:14:19.597 05:03:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.597 05:03:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:14:19.597 05:03:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:14:19.597 05:03:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.597 05:03:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.597 [2024-12-14 05:03:30.238576] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:14:19.597 [2024-12-14 05:03:30.238674] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:19.597 [2024-12-14 05:03:30.238705] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:14:19.597 [2024-12-14 05:03:30.238733] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:19.597 [2024-12-14 05:03:30.239085] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:19.597 [2024-12-14 05:03:30.239146] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 
00:14:19.597 [2024-12-14 05:03:30.239242] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:14:19.597 [2024-12-14 05:03:30.239306] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:14:19.597 [2024-12-14 05:03:30.239414] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:14:19.597 [2024-12-14 05:03:30.239459] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:14:19.597 [2024-12-14 05:03:30.239695] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:19.597 [2024-12-14 05:03:30.240269] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:14:19.597 [2024-12-14 05:03:30.240320] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:14:19.597 [2024-12-14 05:03:30.240528] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:19.597 pt4 00:14:19.597 05:03:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.597 05:03:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:19.597 05:03:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:19.597 05:03:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:19.597 05:03:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:19.597 05:03:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:19.597 05:03:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:19.597 05:03:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:19.597 05:03:30 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:19.597 05:03:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:19.597 05:03:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:19.597 05:03:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:19.597 05:03:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:19.597 05:03:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.597 05:03:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.597 05:03:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.597 05:03:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:19.597 "name": "raid_bdev1", 00:14:19.597 "uuid": "ba993208-3b76-4833-986e-8fd9625cbdd0", 00:14:19.597 "strip_size_kb": 64, 00:14:19.597 "state": "online", 00:14:19.597 "raid_level": "raid5f", 00:14:19.597 "superblock": true, 00:14:19.597 "num_base_bdevs": 4, 00:14:19.597 "num_base_bdevs_discovered": 3, 00:14:19.597 "num_base_bdevs_operational": 3, 00:14:19.597 "base_bdevs_list": [ 00:14:19.597 { 00:14:19.597 "name": null, 00:14:19.597 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:19.597 "is_configured": false, 00:14:19.597 "data_offset": 2048, 00:14:19.597 "data_size": 63488 00:14:19.597 }, 00:14:19.597 { 00:14:19.597 "name": "pt2", 00:14:19.597 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:19.597 "is_configured": true, 00:14:19.597 "data_offset": 2048, 00:14:19.597 "data_size": 63488 00:14:19.597 }, 00:14:19.597 { 00:14:19.597 "name": "pt3", 00:14:19.597 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:19.597 "is_configured": true, 00:14:19.597 "data_offset": 2048, 00:14:19.597 "data_size": 63488 
00:14:19.597 }, 00:14:19.597 { 00:14:19.597 "name": "pt4", 00:14:19.597 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:19.597 "is_configured": true, 00:14:19.597 "data_offset": 2048, 00:14:19.597 "data_size": 63488 00:14:19.597 } 00:14:19.597 ] 00:14:19.597 }' 00:14:19.597 05:03:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:19.597 05:03:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.856 05:03:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:14:19.856 05:03:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:14:19.856 05:03:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.856 05:03:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.856 05:03:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.856 05:03:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:14:19.856 05:03:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:19.856 05:03:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.856 05:03:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.856 05:03:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:14:19.856 [2024-12-14 05:03:30.733968] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:20.116 05:03:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.116 05:03:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' ba993208-3b76-4833-986e-8fd9625cbdd0 '!=' ba993208-3b76-4833-986e-8fd9625cbdd0 ']' 00:14:20.116 05:03:30 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 94562 00:14:20.116 05:03:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 94562 ']' 00:14:20.116 05:03:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # kill -0 94562 00:14:20.116 05:03:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@955 -- # uname 00:14:20.116 05:03:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:20.116 05:03:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 94562 00:14:20.116 05:03:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:20.116 05:03:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:20.116 05:03:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 94562' 00:14:20.116 killing process with pid 94562 00:14:20.116 05:03:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@969 -- # kill 94562 00:14:20.116 [2024-12-14 05:03:30.823541] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:20.116 [2024-12-14 05:03:30.823609] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:20.116 [2024-12-14 05:03:30.823675] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:20.116 [2024-12-14 05:03:30.823685] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:14:20.116 05:03:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@974 -- # wait 94562 00:14:20.116 [2024-12-14 05:03:30.867070] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:20.377 05:03:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:14:20.377 
************************************ 00:14:20.377 END TEST raid5f_superblock_test 00:14:20.377 ************************************ 00:14:20.377 00:14:20.377 real 0m7.293s 00:14:20.377 user 0m12.237s 00:14:20.377 sys 0m1.598s 00:14:20.377 05:03:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:20.377 05:03:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.377 05:03:31 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:14:20.377 05:03:31 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 4 false false true 00:14:20.377 05:03:31 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:14:20.377 05:03:31 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:20.377 05:03:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:20.377 ************************************ 00:14:20.377 START TEST raid5f_rebuild_test 00:14:20.377 ************************************ 00:14:20.377 05:03:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid5f 4 false false true 00:14:20.377 05:03:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:14:20.377 05:03:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:14:20.377 05:03:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:14:20.377 05:03:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:14:20.377 05:03:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:20.377 05:03:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:20.377 05:03:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:20.377 05:03:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev1 00:14:20.377 05:03:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:20.377 05:03:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:20.377 05:03:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:20.377 05:03:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:20.377 05:03:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:20.377 05:03:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:14:20.377 05:03:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:20.377 05:03:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:20.377 05:03:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:14:20.377 05:03:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:20.377 05:03:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:20.377 05:03:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:20.377 05:03:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:20.377 05:03:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:20.377 05:03:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:20.377 05:03:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:20.377 05:03:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:20.377 05:03:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:20.377 05:03:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:14:20.377 05:03:31 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:14:20.377 05:03:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:14:20.377 05:03:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:14:20.377 05:03:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:14:20.377 05:03:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=95031 00:14:20.377 05:03:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:20.377 05:03:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 95031 00:14:20.377 05:03:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@831 -- # '[' -z 95031 ']' 00:14:20.377 05:03:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:20.377 05:03:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:20.377 05:03:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:20.377 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:20.377 05:03:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:20.377 05:03:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.637 [2024-12-14 05:03:31.312061] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:14:20.637 [2024-12-14 05:03:31.312347] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95031 ] 00:14:20.637 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:20.637 Zero copy mechanism will not be used. 00:14:20.637 [2024-12-14 05:03:31.476919] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:20.896 [2024-12-14 05:03:31.522808] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:20.896 [2024-12-14 05:03:31.565733] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:20.896 [2024-12-14 05:03:31.565849] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:21.465 05:03:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:21.465 05:03:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # return 0 00:14:21.465 05:03:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:21.465 05:03:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:21.465 05:03:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.465 05:03:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.465 BaseBdev1_malloc 00:14:21.465 05:03:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.465 05:03:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:21.465 05:03:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.465 05:03:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 
-- # set +x 00:14:21.465 [2024-12-14 05:03:32.148308] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:21.465 [2024-12-14 05:03:32.148371] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:21.465 [2024-12-14 05:03:32.148405] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:21.465 [2024-12-14 05:03:32.148421] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:21.465 [2024-12-14 05:03:32.150554] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:21.465 [2024-12-14 05:03:32.150590] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:21.465 BaseBdev1 00:14:21.465 05:03:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.465 05:03:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:21.465 05:03:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:21.465 05:03:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.465 05:03:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.465 BaseBdev2_malloc 00:14:21.465 05:03:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.465 05:03:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:21.465 05:03:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.465 05:03:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.465 [2024-12-14 05:03:32.191514] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:21.465 [2024-12-14 05:03:32.191618] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:21.465 [2024-12-14 05:03:32.191664] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:21.465 [2024-12-14 05:03:32.191686] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:21.465 [2024-12-14 05:03:32.196430] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:21.465 [2024-12-14 05:03:32.196499] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:21.465 BaseBdev2 00:14:21.465 05:03:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.465 05:03:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:21.465 05:03:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:21.465 05:03:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.465 05:03:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.465 BaseBdev3_malloc 00:14:21.465 05:03:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.465 05:03:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:14:21.465 05:03:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.465 05:03:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.465 [2024-12-14 05:03:32.222727] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:14:21.466 [2024-12-14 05:03:32.222777] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:21.466 [2024-12-14 05:03:32.222801] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:21.466 
[2024-12-14 05:03:32.222809] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:21.466 [2024-12-14 05:03:32.224854] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:21.466 [2024-12-14 05:03:32.224893] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:21.466 BaseBdev3 00:14:21.466 05:03:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.466 05:03:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:21.466 05:03:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:14:21.466 05:03:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.466 05:03:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.466 BaseBdev4_malloc 00:14:21.466 05:03:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.466 05:03:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:14:21.466 05:03:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.466 05:03:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.466 [2024-12-14 05:03:32.251612] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:14:21.466 [2024-12-14 05:03:32.251668] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:21.466 [2024-12-14 05:03:32.251693] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:14:21.466 [2024-12-14 05:03:32.251701] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:21.466 [2024-12-14 05:03:32.253683] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev 
registered 00:14:21.466 [2024-12-14 05:03:32.253719] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:14:21.466 BaseBdev4 00:14:21.466 05:03:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.466 05:03:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:21.466 05:03:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.466 05:03:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.466 spare_malloc 00:14:21.466 05:03:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.466 05:03:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:21.466 05:03:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.466 05:03:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.466 spare_delay 00:14:21.466 05:03:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.466 05:03:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:21.466 05:03:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.466 05:03:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.466 [2024-12-14 05:03:32.292265] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:21.466 [2024-12-14 05:03:32.292317] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:21.466 [2024-12-14 05:03:32.292339] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:14:21.466 [2024-12-14 05:03:32.292347] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:21.466 [2024-12-14 05:03:32.294362] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:21.466 [2024-12-14 05:03:32.294482] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:21.466 spare 00:14:21.466 05:03:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.466 05:03:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:14:21.466 05:03:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.466 05:03:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.466 [2024-12-14 05:03:32.304322] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:21.466 [2024-12-14 05:03:32.306131] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:21.466 [2024-12-14 05:03:32.306216] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:21.466 [2024-12-14 05:03:32.306255] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:21.466 [2024-12-14 05:03:32.306332] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:14:21.466 [2024-12-14 05:03:32.306341] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:14:21.466 [2024-12-14 05:03:32.306609] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:14:21.466 [2024-12-14 05:03:32.307038] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:14:21.466 [2024-12-14 05:03:32.307052] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:14:21.466 [2024-12-14 
05:03:32.307168] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:21.466 05:03:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.466 05:03:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:14:21.466 05:03:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:21.466 05:03:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:21.466 05:03:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:21.466 05:03:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:21.466 05:03:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:21.466 05:03:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:21.466 05:03:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:21.466 05:03:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:21.466 05:03:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:21.466 05:03:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:21.466 05:03:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:21.466 05:03:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.466 05:03:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.466 05:03:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.725 05:03:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:21.725 "name": "raid_bdev1", 00:14:21.725 "uuid": 
"d40d3313-d521-483c-84bd-b10dbe025306", 00:14:21.725 "strip_size_kb": 64, 00:14:21.725 "state": "online", 00:14:21.725 "raid_level": "raid5f", 00:14:21.725 "superblock": false, 00:14:21.725 "num_base_bdevs": 4, 00:14:21.725 "num_base_bdevs_discovered": 4, 00:14:21.725 "num_base_bdevs_operational": 4, 00:14:21.725 "base_bdevs_list": [ 00:14:21.725 { 00:14:21.725 "name": "BaseBdev1", 00:14:21.725 "uuid": "888234e1-f82b-5309-966d-8b82ab9c091b", 00:14:21.725 "is_configured": true, 00:14:21.725 "data_offset": 0, 00:14:21.725 "data_size": 65536 00:14:21.725 }, 00:14:21.725 { 00:14:21.725 "name": "BaseBdev2", 00:14:21.725 "uuid": "1023130c-73fd-599e-8160-d4e04518809b", 00:14:21.725 "is_configured": true, 00:14:21.725 "data_offset": 0, 00:14:21.725 "data_size": 65536 00:14:21.725 }, 00:14:21.725 { 00:14:21.725 "name": "BaseBdev3", 00:14:21.725 "uuid": "9048c45c-ac7f-5844-90bf-4c7002bb8e3e", 00:14:21.725 "is_configured": true, 00:14:21.725 "data_offset": 0, 00:14:21.725 "data_size": 65536 00:14:21.725 }, 00:14:21.725 { 00:14:21.725 "name": "BaseBdev4", 00:14:21.725 "uuid": "47d7cb78-a685-5522-8383-3c448e5fa5ed", 00:14:21.725 "is_configured": true, 00:14:21.725 "data_offset": 0, 00:14:21.725 "data_size": 65536 00:14:21.725 } 00:14:21.725 ] 00:14:21.725 }' 00:14:21.726 05:03:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:21.726 05:03:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.985 05:03:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:21.985 05:03:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:21.985 05:03:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.985 05:03:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.985 [2024-12-14 05:03:32.764427] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:14:21.985 05:03:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.985 05:03:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=196608 00:14:21.985 05:03:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:21.985 05:03:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.985 05:03:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.985 05:03:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:21.985 05:03:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.985 05:03:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:14:21.985 05:03:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:14:21.985 05:03:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:14:21.985 05:03:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:14:21.985 05:03:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:14:21.985 05:03:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:21.985 05:03:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:14:21.985 05:03:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:21.985 05:03:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:21.985 05:03:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:21.985 05:03:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:14:21.985 05:03:32 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:21.985 05:03:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:21.985 05:03:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:14:22.244 [2024-12-14 05:03:33.031851] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:22.244 /dev/nbd0 00:14:22.244 05:03:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:22.244 05:03:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:22.244 05:03:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:14:22.244 05:03:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:14:22.244 05:03:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:22.244 05:03:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:22.244 05:03:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:14:22.244 05:03:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:14:22.244 05:03:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:22.244 05:03:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:22.244 05:03:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:22.244 1+0 records in 00:14:22.244 1+0 records out 00:14:22.244 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000436504 s, 9.4 MB/s 00:14:22.244 05:03:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:22.244 05:03:33 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:14:22.245 05:03:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:22.245 05:03:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:22.245 05:03:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:14:22.245 05:03:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:22.245 05:03:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:22.245 05:03:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:14:22.245 05:03:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:14:22.245 05:03:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 192 00:14:22.245 05:03:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=512 oflag=direct 00:14:22.813 512+0 records in 00:14:22.813 512+0 records out 00:14:22.813 100663296 bytes (101 MB, 96 MiB) copied, 0.496952 s, 203 MB/s 00:14:22.813 05:03:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:22.813 05:03:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:22.813 05:03:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:22.813 05:03:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:22.813 05:03:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:14:22.813 05:03:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:22.813 05:03:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock 
nbd_stop_disk /dev/nbd0 00:14:23.073 05:03:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:23.073 [2024-12-14 05:03:33.828390] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:23.073 05:03:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:23.073 05:03:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:23.073 05:03:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:23.073 05:03:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:23.073 05:03:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:23.073 05:03:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:14:23.073 05:03:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:14:23.073 05:03:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:23.073 05:03:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.073 05:03:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.073 [2024-12-14 05:03:33.844414] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:23.073 05:03:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.073 05:03:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:23.073 05:03:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:23.073 05:03:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:23.073 05:03:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:23.073 05:03:33 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:23.073 05:03:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:23.073 05:03:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:23.073 05:03:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:23.073 05:03:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:23.073 05:03:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:23.073 05:03:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:23.073 05:03:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:23.073 05:03:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.073 05:03:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.073 05:03:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.073 05:03:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:23.073 "name": "raid_bdev1", 00:14:23.073 "uuid": "d40d3313-d521-483c-84bd-b10dbe025306", 00:14:23.073 "strip_size_kb": 64, 00:14:23.073 "state": "online", 00:14:23.073 "raid_level": "raid5f", 00:14:23.073 "superblock": false, 00:14:23.073 "num_base_bdevs": 4, 00:14:23.073 "num_base_bdevs_discovered": 3, 00:14:23.073 "num_base_bdevs_operational": 3, 00:14:23.073 "base_bdevs_list": [ 00:14:23.073 { 00:14:23.073 "name": null, 00:14:23.073 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:23.073 "is_configured": false, 00:14:23.073 "data_offset": 0, 00:14:23.073 "data_size": 65536 00:14:23.073 }, 00:14:23.073 { 00:14:23.073 "name": "BaseBdev2", 00:14:23.073 "uuid": "1023130c-73fd-599e-8160-d4e04518809b", 00:14:23.073 "is_configured": true, 00:14:23.073 
"data_offset": 0, 00:14:23.073 "data_size": 65536 00:14:23.073 }, 00:14:23.073 { 00:14:23.073 "name": "BaseBdev3", 00:14:23.073 "uuid": "9048c45c-ac7f-5844-90bf-4c7002bb8e3e", 00:14:23.073 "is_configured": true, 00:14:23.073 "data_offset": 0, 00:14:23.073 "data_size": 65536 00:14:23.073 }, 00:14:23.073 { 00:14:23.073 "name": "BaseBdev4", 00:14:23.073 "uuid": "47d7cb78-a685-5522-8383-3c448e5fa5ed", 00:14:23.073 "is_configured": true, 00:14:23.073 "data_offset": 0, 00:14:23.073 "data_size": 65536 00:14:23.073 } 00:14:23.073 ] 00:14:23.073 }' 00:14:23.073 05:03:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:23.073 05:03:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.642 05:03:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:23.642 05:03:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.642 05:03:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.642 [2024-12-14 05:03:34.291660] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:23.642 [2024-12-14 05:03:34.295031] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b5b0 00:14:23.642 [2024-12-14 05:03:34.297300] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:23.642 05:03:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.642 05:03:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:24.579 05:03:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:24.579 05:03:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:24.579 05:03:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local 
process_type=rebuild 00:14:24.579 05:03:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:24.579 05:03:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:24.579 05:03:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:24.579 05:03:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:24.579 05:03:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.579 05:03:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.579 05:03:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.579 05:03:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:24.579 "name": "raid_bdev1", 00:14:24.579 "uuid": "d40d3313-d521-483c-84bd-b10dbe025306", 00:14:24.579 "strip_size_kb": 64, 00:14:24.579 "state": "online", 00:14:24.579 "raid_level": "raid5f", 00:14:24.579 "superblock": false, 00:14:24.579 "num_base_bdevs": 4, 00:14:24.579 "num_base_bdevs_discovered": 4, 00:14:24.579 "num_base_bdevs_operational": 4, 00:14:24.579 "process": { 00:14:24.579 "type": "rebuild", 00:14:24.579 "target": "spare", 00:14:24.579 "progress": { 00:14:24.579 "blocks": 19200, 00:14:24.579 "percent": 9 00:14:24.579 } 00:14:24.579 }, 00:14:24.579 "base_bdevs_list": [ 00:14:24.579 { 00:14:24.579 "name": "spare", 00:14:24.579 "uuid": "57dff2bf-23a1-5bed-94f6-8a70a21a2949", 00:14:24.579 "is_configured": true, 00:14:24.579 "data_offset": 0, 00:14:24.579 "data_size": 65536 00:14:24.579 }, 00:14:24.579 { 00:14:24.579 "name": "BaseBdev2", 00:14:24.579 "uuid": "1023130c-73fd-599e-8160-d4e04518809b", 00:14:24.579 "is_configured": true, 00:14:24.579 "data_offset": 0, 00:14:24.579 "data_size": 65536 00:14:24.579 }, 00:14:24.579 { 00:14:24.579 "name": "BaseBdev3", 00:14:24.579 "uuid": 
"9048c45c-ac7f-5844-90bf-4c7002bb8e3e", 00:14:24.579 "is_configured": true, 00:14:24.579 "data_offset": 0, 00:14:24.579 "data_size": 65536 00:14:24.579 }, 00:14:24.579 { 00:14:24.579 "name": "BaseBdev4", 00:14:24.579 "uuid": "47d7cb78-a685-5522-8383-3c448e5fa5ed", 00:14:24.579 "is_configured": true, 00:14:24.579 "data_offset": 0, 00:14:24.579 "data_size": 65536 00:14:24.579 } 00:14:24.579 ] 00:14:24.579 }' 00:14:24.579 05:03:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:24.579 05:03:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:24.579 05:03:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:24.579 05:03:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:24.579 05:03:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:24.579 05:03:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.579 05:03:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.838 [2024-12-14 05:03:35.463884] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:24.838 [2024-12-14 05:03:35.502596] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:24.838 [2024-12-14 05:03:35.502649] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:24.838 [2024-12-14 05:03:35.502668] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:24.838 [2024-12-14 05:03:35.502675] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:24.838 05:03:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.838 05:03:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # 
verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:24.838 05:03:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:24.838 05:03:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:24.838 05:03:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:24.838 05:03:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:24.838 05:03:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:24.838 05:03:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:24.838 05:03:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:24.838 05:03:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:24.838 05:03:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:24.838 05:03:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:24.838 05:03:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.838 05:03:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.838 05:03:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:24.838 05:03:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.838 05:03:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:24.838 "name": "raid_bdev1", 00:14:24.838 "uuid": "d40d3313-d521-483c-84bd-b10dbe025306", 00:14:24.838 "strip_size_kb": 64, 00:14:24.838 "state": "online", 00:14:24.838 "raid_level": "raid5f", 00:14:24.838 "superblock": false, 00:14:24.838 "num_base_bdevs": 4, 00:14:24.838 "num_base_bdevs_discovered": 3, 00:14:24.838 
"num_base_bdevs_operational": 3, 00:14:24.838 "base_bdevs_list": [ 00:14:24.838 { 00:14:24.838 "name": null, 00:14:24.838 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:24.838 "is_configured": false, 00:14:24.838 "data_offset": 0, 00:14:24.838 "data_size": 65536 00:14:24.838 }, 00:14:24.838 { 00:14:24.838 "name": "BaseBdev2", 00:14:24.838 "uuid": "1023130c-73fd-599e-8160-d4e04518809b", 00:14:24.838 "is_configured": true, 00:14:24.838 "data_offset": 0, 00:14:24.838 "data_size": 65536 00:14:24.838 }, 00:14:24.838 { 00:14:24.838 "name": "BaseBdev3", 00:14:24.838 "uuid": "9048c45c-ac7f-5844-90bf-4c7002bb8e3e", 00:14:24.838 "is_configured": true, 00:14:24.838 "data_offset": 0, 00:14:24.838 "data_size": 65536 00:14:24.838 }, 00:14:24.838 { 00:14:24.838 "name": "BaseBdev4", 00:14:24.838 "uuid": "47d7cb78-a685-5522-8383-3c448e5fa5ed", 00:14:24.838 "is_configured": true, 00:14:24.838 "data_offset": 0, 00:14:24.838 "data_size": 65536 00:14:24.838 } 00:14:24.838 ] 00:14:24.838 }' 00:14:24.838 05:03:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:24.838 05:03:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.097 05:03:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:25.097 05:03:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:25.097 05:03:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:25.097 05:03:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:25.097 05:03:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:25.097 05:03:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:25.097 05:03:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.097 05:03:35 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.097 05:03:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:25.097 05:03:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.097 05:03:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:25.097 "name": "raid_bdev1", 00:14:25.097 "uuid": "d40d3313-d521-483c-84bd-b10dbe025306", 00:14:25.097 "strip_size_kb": 64, 00:14:25.097 "state": "online", 00:14:25.097 "raid_level": "raid5f", 00:14:25.097 "superblock": false, 00:14:25.097 "num_base_bdevs": 4, 00:14:25.097 "num_base_bdevs_discovered": 3, 00:14:25.097 "num_base_bdevs_operational": 3, 00:14:25.097 "base_bdevs_list": [ 00:14:25.097 { 00:14:25.097 "name": null, 00:14:25.097 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:25.097 "is_configured": false, 00:14:25.097 "data_offset": 0, 00:14:25.097 "data_size": 65536 00:14:25.097 }, 00:14:25.097 { 00:14:25.097 "name": "BaseBdev2", 00:14:25.097 "uuid": "1023130c-73fd-599e-8160-d4e04518809b", 00:14:25.097 "is_configured": true, 00:14:25.097 "data_offset": 0, 00:14:25.097 "data_size": 65536 00:14:25.097 }, 00:14:25.097 { 00:14:25.097 "name": "BaseBdev3", 00:14:25.097 "uuid": "9048c45c-ac7f-5844-90bf-4c7002bb8e3e", 00:14:25.097 "is_configured": true, 00:14:25.097 "data_offset": 0, 00:14:25.097 "data_size": 65536 00:14:25.097 }, 00:14:25.097 { 00:14:25.097 "name": "BaseBdev4", 00:14:25.097 "uuid": "47d7cb78-a685-5522-8383-3c448e5fa5ed", 00:14:25.097 "is_configured": true, 00:14:25.097 "data_offset": 0, 00:14:25.097 "data_size": 65536 00:14:25.097 } 00:14:25.097 ] 00:14:25.097 }' 00:14:25.097 05:03:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:25.356 05:03:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:25.356 05:03:35 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:25.356 05:03:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:25.356 05:03:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:25.356 05:03:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.356 05:03:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.356 [2024-12-14 05:03:36.038884] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:25.356 [2024-12-14 05:03:36.041760] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b680 00:14:25.356 [2024-12-14 05:03:36.043988] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:25.356 05:03:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.356 05:03:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:26.293 05:03:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:26.293 05:03:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:26.293 05:03:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:26.293 05:03:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:26.293 05:03:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:26.293 05:03:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:26.293 05:03:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:26.293 05:03:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.293 
05:03:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.293 05:03:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.293 05:03:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:26.293 "name": "raid_bdev1", 00:14:26.293 "uuid": "d40d3313-d521-483c-84bd-b10dbe025306", 00:14:26.293 "strip_size_kb": 64, 00:14:26.293 "state": "online", 00:14:26.293 "raid_level": "raid5f", 00:14:26.293 "superblock": false, 00:14:26.293 "num_base_bdevs": 4, 00:14:26.293 "num_base_bdevs_discovered": 4, 00:14:26.293 "num_base_bdevs_operational": 4, 00:14:26.293 "process": { 00:14:26.293 "type": "rebuild", 00:14:26.293 "target": "spare", 00:14:26.293 "progress": { 00:14:26.293 "blocks": 19200, 00:14:26.293 "percent": 9 00:14:26.293 } 00:14:26.293 }, 00:14:26.293 "base_bdevs_list": [ 00:14:26.293 { 00:14:26.293 "name": "spare", 00:14:26.293 "uuid": "57dff2bf-23a1-5bed-94f6-8a70a21a2949", 00:14:26.293 "is_configured": true, 00:14:26.293 "data_offset": 0, 00:14:26.293 "data_size": 65536 00:14:26.293 }, 00:14:26.293 { 00:14:26.293 "name": "BaseBdev2", 00:14:26.293 "uuid": "1023130c-73fd-599e-8160-d4e04518809b", 00:14:26.293 "is_configured": true, 00:14:26.293 "data_offset": 0, 00:14:26.293 "data_size": 65536 00:14:26.293 }, 00:14:26.293 { 00:14:26.293 "name": "BaseBdev3", 00:14:26.293 "uuid": "9048c45c-ac7f-5844-90bf-4c7002bb8e3e", 00:14:26.293 "is_configured": true, 00:14:26.293 "data_offset": 0, 00:14:26.293 "data_size": 65536 00:14:26.293 }, 00:14:26.293 { 00:14:26.293 "name": "BaseBdev4", 00:14:26.293 "uuid": "47d7cb78-a685-5522-8383-3c448e5fa5ed", 00:14:26.293 "is_configured": true, 00:14:26.293 "data_offset": 0, 00:14:26.293 "data_size": 65536 00:14:26.293 } 00:14:26.293 ] 00:14:26.293 }' 00:14:26.293 05:03:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:26.293 05:03:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 
-- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:26.293 05:03:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:26.553 05:03:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:26.553 05:03:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:14:26.553 05:03:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:14:26.553 05:03:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:14:26.553 05:03:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=508 00:14:26.553 05:03:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:26.553 05:03:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:26.553 05:03:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:26.553 05:03:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:26.553 05:03:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:26.553 05:03:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:26.553 05:03:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:26.553 05:03:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.553 05:03:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:26.553 05:03:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.553 05:03:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.553 05:03:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:14:26.553 "name": "raid_bdev1", 00:14:26.553 "uuid": "d40d3313-d521-483c-84bd-b10dbe025306", 00:14:26.553 "strip_size_kb": 64, 00:14:26.553 "state": "online", 00:14:26.553 "raid_level": "raid5f", 00:14:26.553 "superblock": false, 00:14:26.553 "num_base_bdevs": 4, 00:14:26.553 "num_base_bdevs_discovered": 4, 00:14:26.553 "num_base_bdevs_operational": 4, 00:14:26.553 "process": { 00:14:26.553 "type": "rebuild", 00:14:26.553 "target": "spare", 00:14:26.553 "progress": { 00:14:26.553 "blocks": 21120, 00:14:26.553 "percent": 10 00:14:26.553 } 00:14:26.553 }, 00:14:26.553 "base_bdevs_list": [ 00:14:26.553 { 00:14:26.553 "name": "spare", 00:14:26.553 "uuid": "57dff2bf-23a1-5bed-94f6-8a70a21a2949", 00:14:26.553 "is_configured": true, 00:14:26.553 "data_offset": 0, 00:14:26.553 "data_size": 65536 00:14:26.553 }, 00:14:26.553 { 00:14:26.553 "name": "BaseBdev2", 00:14:26.553 "uuid": "1023130c-73fd-599e-8160-d4e04518809b", 00:14:26.553 "is_configured": true, 00:14:26.553 "data_offset": 0, 00:14:26.553 "data_size": 65536 00:14:26.553 }, 00:14:26.553 { 00:14:26.553 "name": "BaseBdev3", 00:14:26.553 "uuid": "9048c45c-ac7f-5844-90bf-4c7002bb8e3e", 00:14:26.553 "is_configured": true, 00:14:26.553 "data_offset": 0, 00:14:26.553 "data_size": 65536 00:14:26.553 }, 00:14:26.553 { 00:14:26.553 "name": "BaseBdev4", 00:14:26.553 "uuid": "47d7cb78-a685-5522-8383-3c448e5fa5ed", 00:14:26.553 "is_configured": true, 00:14:26.553 "data_offset": 0, 00:14:26.553 "data_size": 65536 00:14:26.553 } 00:14:26.553 ] 00:14:26.553 }' 00:14:26.553 05:03:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:26.553 05:03:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:26.553 05:03:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:26.553 05:03:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:26.553 05:03:37 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:27.490 05:03:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:27.490 05:03:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:27.490 05:03:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:27.490 05:03:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:27.490 05:03:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:27.490 05:03:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:27.490 05:03:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:27.490 05:03:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.490 05:03:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:27.490 05:03:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.490 05:03:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.750 05:03:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:27.750 "name": "raid_bdev1", 00:14:27.750 "uuid": "d40d3313-d521-483c-84bd-b10dbe025306", 00:14:27.750 "strip_size_kb": 64, 00:14:27.750 "state": "online", 00:14:27.750 "raid_level": "raid5f", 00:14:27.750 "superblock": false, 00:14:27.750 "num_base_bdevs": 4, 00:14:27.750 "num_base_bdevs_discovered": 4, 00:14:27.750 "num_base_bdevs_operational": 4, 00:14:27.750 "process": { 00:14:27.750 "type": "rebuild", 00:14:27.750 "target": "spare", 00:14:27.750 "progress": { 00:14:27.750 "blocks": 44160, 00:14:27.750 "percent": 22 00:14:27.750 } 00:14:27.750 }, 00:14:27.750 "base_bdevs_list": [ 00:14:27.750 { 
00:14:27.750 "name": "spare", 00:14:27.750 "uuid": "57dff2bf-23a1-5bed-94f6-8a70a21a2949", 00:14:27.750 "is_configured": true, 00:14:27.750 "data_offset": 0, 00:14:27.750 "data_size": 65536 00:14:27.750 }, 00:14:27.750 { 00:14:27.750 "name": "BaseBdev2", 00:14:27.750 "uuid": "1023130c-73fd-599e-8160-d4e04518809b", 00:14:27.750 "is_configured": true, 00:14:27.750 "data_offset": 0, 00:14:27.750 "data_size": 65536 00:14:27.750 }, 00:14:27.750 { 00:14:27.750 "name": "BaseBdev3", 00:14:27.750 "uuid": "9048c45c-ac7f-5844-90bf-4c7002bb8e3e", 00:14:27.750 "is_configured": true, 00:14:27.750 "data_offset": 0, 00:14:27.750 "data_size": 65536 00:14:27.750 }, 00:14:27.750 { 00:14:27.750 "name": "BaseBdev4", 00:14:27.750 "uuid": "47d7cb78-a685-5522-8383-3c448e5fa5ed", 00:14:27.750 "is_configured": true, 00:14:27.750 "data_offset": 0, 00:14:27.750 "data_size": 65536 00:14:27.750 } 00:14:27.750 ] 00:14:27.750 }' 00:14:27.750 05:03:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:27.750 05:03:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:27.750 05:03:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:27.750 05:03:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:27.750 05:03:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:28.687 05:03:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:28.687 05:03:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:28.687 05:03:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:28.687 05:03:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:28.687 05:03:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # 
local target=spare 00:14:28.687 05:03:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:28.687 05:03:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:28.687 05:03:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:28.687 05:03:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.687 05:03:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.687 05:03:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.687 05:03:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:28.687 "name": "raid_bdev1", 00:14:28.687 "uuid": "d40d3313-d521-483c-84bd-b10dbe025306", 00:14:28.687 "strip_size_kb": 64, 00:14:28.687 "state": "online", 00:14:28.687 "raid_level": "raid5f", 00:14:28.687 "superblock": false, 00:14:28.687 "num_base_bdevs": 4, 00:14:28.687 "num_base_bdevs_discovered": 4, 00:14:28.687 "num_base_bdevs_operational": 4, 00:14:28.687 "process": { 00:14:28.687 "type": "rebuild", 00:14:28.687 "target": "spare", 00:14:28.687 "progress": { 00:14:28.687 "blocks": 65280, 00:14:28.687 "percent": 33 00:14:28.687 } 00:14:28.687 }, 00:14:28.687 "base_bdevs_list": [ 00:14:28.687 { 00:14:28.687 "name": "spare", 00:14:28.687 "uuid": "57dff2bf-23a1-5bed-94f6-8a70a21a2949", 00:14:28.687 "is_configured": true, 00:14:28.687 "data_offset": 0, 00:14:28.687 "data_size": 65536 00:14:28.687 }, 00:14:28.687 { 00:14:28.687 "name": "BaseBdev2", 00:14:28.687 "uuid": "1023130c-73fd-599e-8160-d4e04518809b", 00:14:28.687 "is_configured": true, 00:14:28.687 "data_offset": 0, 00:14:28.687 "data_size": 65536 00:14:28.687 }, 00:14:28.687 { 00:14:28.687 "name": "BaseBdev3", 00:14:28.687 "uuid": "9048c45c-ac7f-5844-90bf-4c7002bb8e3e", 00:14:28.687 "is_configured": true, 00:14:28.687 "data_offset": 0, 00:14:28.687 
"data_size": 65536 00:14:28.687 }, 00:14:28.687 { 00:14:28.687 "name": "BaseBdev4", 00:14:28.687 "uuid": "47d7cb78-a685-5522-8383-3c448e5fa5ed", 00:14:28.687 "is_configured": true, 00:14:28.687 "data_offset": 0, 00:14:28.687 "data_size": 65536 00:14:28.687 } 00:14:28.687 ] 00:14:28.687 }' 00:14:28.687 05:03:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:28.946 05:03:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:28.946 05:03:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:28.946 05:03:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:28.946 05:03:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:29.883 05:03:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:29.883 05:03:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:29.883 05:03:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:29.883 05:03:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:29.883 05:03:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:29.883 05:03:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:29.883 05:03:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:29.883 05:03:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:29.883 05:03:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.883 05:03:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.883 05:03:40 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.883 05:03:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:29.883 "name": "raid_bdev1", 00:14:29.883 "uuid": "d40d3313-d521-483c-84bd-b10dbe025306", 00:14:29.883 "strip_size_kb": 64, 00:14:29.883 "state": "online", 00:14:29.883 "raid_level": "raid5f", 00:14:29.883 "superblock": false, 00:14:29.883 "num_base_bdevs": 4, 00:14:29.883 "num_base_bdevs_discovered": 4, 00:14:29.883 "num_base_bdevs_operational": 4, 00:14:29.883 "process": { 00:14:29.883 "type": "rebuild", 00:14:29.883 "target": "spare", 00:14:29.883 "progress": { 00:14:29.883 "blocks": 86400, 00:14:29.883 "percent": 43 00:14:29.883 } 00:14:29.883 }, 00:14:29.883 "base_bdevs_list": [ 00:14:29.883 { 00:14:29.883 "name": "spare", 00:14:29.883 "uuid": "57dff2bf-23a1-5bed-94f6-8a70a21a2949", 00:14:29.883 "is_configured": true, 00:14:29.883 "data_offset": 0, 00:14:29.883 "data_size": 65536 00:14:29.883 }, 00:14:29.883 { 00:14:29.883 "name": "BaseBdev2", 00:14:29.884 "uuid": "1023130c-73fd-599e-8160-d4e04518809b", 00:14:29.884 "is_configured": true, 00:14:29.884 "data_offset": 0, 00:14:29.884 "data_size": 65536 00:14:29.884 }, 00:14:29.884 { 00:14:29.884 "name": "BaseBdev3", 00:14:29.884 "uuid": "9048c45c-ac7f-5844-90bf-4c7002bb8e3e", 00:14:29.884 "is_configured": true, 00:14:29.884 "data_offset": 0, 00:14:29.884 "data_size": 65536 00:14:29.884 }, 00:14:29.884 { 00:14:29.884 "name": "BaseBdev4", 00:14:29.884 "uuid": "47d7cb78-a685-5522-8383-3c448e5fa5ed", 00:14:29.884 "is_configured": true, 00:14:29.884 "data_offset": 0, 00:14:29.884 "data_size": 65536 00:14:29.884 } 00:14:29.884 ] 00:14:29.884 }' 00:14:29.884 05:03:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:29.884 05:03:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:29.884 05:03:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:14:29.884 05:03:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:29.884 05:03:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:31.263 05:03:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:31.263 05:03:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:31.263 05:03:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:31.263 05:03:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:31.263 05:03:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:31.263 05:03:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:31.263 05:03:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:31.263 05:03:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:31.263 05:03:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.263 05:03:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.263 05:03:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.263 05:03:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:31.263 "name": "raid_bdev1", 00:14:31.263 "uuid": "d40d3313-d521-483c-84bd-b10dbe025306", 00:14:31.263 "strip_size_kb": 64, 00:14:31.263 "state": "online", 00:14:31.263 "raid_level": "raid5f", 00:14:31.263 "superblock": false, 00:14:31.263 "num_base_bdevs": 4, 00:14:31.263 "num_base_bdevs_discovered": 4, 00:14:31.263 "num_base_bdevs_operational": 4, 00:14:31.263 "process": { 00:14:31.263 "type": "rebuild", 00:14:31.263 "target": "spare", 00:14:31.263 
"progress": { 00:14:31.263 "blocks": 109440, 00:14:31.263 "percent": 55 00:14:31.263 } 00:14:31.263 }, 00:14:31.263 "base_bdevs_list": [ 00:14:31.263 { 00:14:31.263 "name": "spare", 00:14:31.263 "uuid": "57dff2bf-23a1-5bed-94f6-8a70a21a2949", 00:14:31.263 "is_configured": true, 00:14:31.263 "data_offset": 0, 00:14:31.263 "data_size": 65536 00:14:31.263 }, 00:14:31.263 { 00:14:31.263 "name": "BaseBdev2", 00:14:31.263 "uuid": "1023130c-73fd-599e-8160-d4e04518809b", 00:14:31.263 "is_configured": true, 00:14:31.263 "data_offset": 0, 00:14:31.263 "data_size": 65536 00:14:31.263 }, 00:14:31.263 { 00:14:31.263 "name": "BaseBdev3", 00:14:31.263 "uuid": "9048c45c-ac7f-5844-90bf-4c7002bb8e3e", 00:14:31.263 "is_configured": true, 00:14:31.263 "data_offset": 0, 00:14:31.263 "data_size": 65536 00:14:31.263 }, 00:14:31.263 { 00:14:31.263 "name": "BaseBdev4", 00:14:31.263 "uuid": "47d7cb78-a685-5522-8383-3c448e5fa5ed", 00:14:31.263 "is_configured": true, 00:14:31.263 "data_offset": 0, 00:14:31.263 "data_size": 65536 00:14:31.263 } 00:14:31.263 ] 00:14:31.263 }' 00:14:31.263 05:03:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:31.263 05:03:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:31.263 05:03:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:31.263 05:03:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:31.263 05:03:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:32.200 05:03:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:32.200 05:03:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:32.200 05:03:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:32.200 05:03:42 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:32.200 05:03:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:32.200 05:03:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:32.200 05:03:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:32.200 05:03:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:32.200 05:03:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.200 05:03:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.200 05:03:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.201 05:03:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:32.201 "name": "raid_bdev1", 00:14:32.201 "uuid": "d40d3313-d521-483c-84bd-b10dbe025306", 00:14:32.201 "strip_size_kb": 64, 00:14:32.201 "state": "online", 00:14:32.201 "raid_level": "raid5f", 00:14:32.201 "superblock": false, 00:14:32.201 "num_base_bdevs": 4, 00:14:32.201 "num_base_bdevs_discovered": 4, 00:14:32.201 "num_base_bdevs_operational": 4, 00:14:32.201 "process": { 00:14:32.201 "type": "rebuild", 00:14:32.201 "target": "spare", 00:14:32.201 "progress": { 00:14:32.201 "blocks": 130560, 00:14:32.201 "percent": 66 00:14:32.201 } 00:14:32.201 }, 00:14:32.201 "base_bdevs_list": [ 00:14:32.201 { 00:14:32.201 "name": "spare", 00:14:32.201 "uuid": "57dff2bf-23a1-5bed-94f6-8a70a21a2949", 00:14:32.201 "is_configured": true, 00:14:32.201 "data_offset": 0, 00:14:32.201 "data_size": 65536 00:14:32.201 }, 00:14:32.201 { 00:14:32.201 "name": "BaseBdev2", 00:14:32.201 "uuid": "1023130c-73fd-599e-8160-d4e04518809b", 00:14:32.201 "is_configured": true, 00:14:32.201 "data_offset": 0, 00:14:32.201 "data_size": 65536 00:14:32.201 }, 00:14:32.201 { 
00:14:32.201 "name": "BaseBdev3", 00:14:32.201 "uuid": "9048c45c-ac7f-5844-90bf-4c7002bb8e3e", 00:14:32.201 "is_configured": true, 00:14:32.201 "data_offset": 0, 00:14:32.201 "data_size": 65536 00:14:32.201 }, 00:14:32.201 { 00:14:32.201 "name": "BaseBdev4", 00:14:32.201 "uuid": "47d7cb78-a685-5522-8383-3c448e5fa5ed", 00:14:32.201 "is_configured": true, 00:14:32.201 "data_offset": 0, 00:14:32.201 "data_size": 65536 00:14:32.201 } 00:14:32.201 ] 00:14:32.201 }' 00:14:32.201 05:03:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:32.201 05:03:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:32.201 05:03:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:32.201 05:03:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:32.201 05:03:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:33.580 05:03:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:33.580 05:03:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:33.580 05:03:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:33.580 05:03:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:33.580 05:03:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:33.580 05:03:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:33.580 05:03:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:33.580 05:03:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.580 05:03:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name 
== "raid_bdev1")' 00:14:33.580 05:03:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:33.580 05:03:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.580 05:03:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:33.580 "name": "raid_bdev1", 00:14:33.580 "uuid": "d40d3313-d521-483c-84bd-b10dbe025306", 00:14:33.580 "strip_size_kb": 64, 00:14:33.580 "state": "online", 00:14:33.580 "raid_level": "raid5f", 00:14:33.580 "superblock": false, 00:14:33.580 "num_base_bdevs": 4, 00:14:33.580 "num_base_bdevs_discovered": 4, 00:14:33.580 "num_base_bdevs_operational": 4, 00:14:33.580 "process": { 00:14:33.580 "type": "rebuild", 00:14:33.580 "target": "spare", 00:14:33.580 "progress": { 00:14:33.580 "blocks": 151680, 00:14:33.580 "percent": 77 00:14:33.580 } 00:14:33.580 }, 00:14:33.580 "base_bdevs_list": [ 00:14:33.580 { 00:14:33.580 "name": "spare", 00:14:33.580 "uuid": "57dff2bf-23a1-5bed-94f6-8a70a21a2949", 00:14:33.580 "is_configured": true, 00:14:33.580 "data_offset": 0, 00:14:33.580 "data_size": 65536 00:14:33.580 }, 00:14:33.580 { 00:14:33.580 "name": "BaseBdev2", 00:14:33.580 "uuid": "1023130c-73fd-599e-8160-d4e04518809b", 00:14:33.580 "is_configured": true, 00:14:33.580 "data_offset": 0, 00:14:33.580 "data_size": 65536 00:14:33.580 }, 00:14:33.580 { 00:14:33.580 "name": "BaseBdev3", 00:14:33.580 "uuid": "9048c45c-ac7f-5844-90bf-4c7002bb8e3e", 00:14:33.580 "is_configured": true, 00:14:33.580 "data_offset": 0, 00:14:33.580 "data_size": 65536 00:14:33.580 }, 00:14:33.580 { 00:14:33.580 "name": "BaseBdev4", 00:14:33.580 "uuid": "47d7cb78-a685-5522-8383-3c448e5fa5ed", 00:14:33.580 "is_configured": true, 00:14:33.580 "data_offset": 0, 00:14:33.580 "data_size": 65536 00:14:33.580 } 00:14:33.580 ] 00:14:33.580 }' 00:14:33.580 05:03:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:33.580 05:03:44 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:33.580 05:03:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:33.580 05:03:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:33.580 05:03:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:34.517 05:03:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:34.517 05:03:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:34.517 05:03:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:34.517 05:03:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:34.517 05:03:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:34.517 05:03:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:34.517 05:03:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:34.517 05:03:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:34.517 05:03:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.517 05:03:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.517 05:03:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.517 05:03:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:34.517 "name": "raid_bdev1", 00:14:34.517 "uuid": "d40d3313-d521-483c-84bd-b10dbe025306", 00:14:34.517 "strip_size_kb": 64, 00:14:34.517 "state": "online", 00:14:34.517 "raid_level": "raid5f", 00:14:34.517 "superblock": false, 00:14:34.517 "num_base_bdevs": 4, 00:14:34.517 
"num_base_bdevs_discovered": 4, 00:14:34.517 "num_base_bdevs_operational": 4, 00:14:34.517 "process": { 00:14:34.517 "type": "rebuild", 00:14:34.517 "target": "spare", 00:14:34.517 "progress": { 00:14:34.517 "blocks": 174720, 00:14:34.517 "percent": 88 00:14:34.517 } 00:14:34.517 }, 00:14:34.517 "base_bdevs_list": [ 00:14:34.517 { 00:14:34.517 "name": "spare", 00:14:34.517 "uuid": "57dff2bf-23a1-5bed-94f6-8a70a21a2949", 00:14:34.517 "is_configured": true, 00:14:34.517 "data_offset": 0, 00:14:34.517 "data_size": 65536 00:14:34.517 }, 00:14:34.517 { 00:14:34.517 "name": "BaseBdev2", 00:14:34.517 "uuid": "1023130c-73fd-599e-8160-d4e04518809b", 00:14:34.517 "is_configured": true, 00:14:34.517 "data_offset": 0, 00:14:34.517 "data_size": 65536 00:14:34.517 }, 00:14:34.517 { 00:14:34.517 "name": "BaseBdev3", 00:14:34.517 "uuid": "9048c45c-ac7f-5844-90bf-4c7002bb8e3e", 00:14:34.517 "is_configured": true, 00:14:34.517 "data_offset": 0, 00:14:34.517 "data_size": 65536 00:14:34.517 }, 00:14:34.517 { 00:14:34.517 "name": "BaseBdev4", 00:14:34.517 "uuid": "47d7cb78-a685-5522-8383-3c448e5fa5ed", 00:14:34.517 "is_configured": true, 00:14:34.517 "data_offset": 0, 00:14:34.517 "data_size": 65536 00:14:34.517 } 00:14:34.517 ] 00:14:34.517 }' 00:14:34.517 05:03:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:34.517 05:03:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:34.518 05:03:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:34.518 05:03:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:34.518 05:03:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:35.455 05:03:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:35.455 05:03:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process 
raid_bdev1 rebuild spare 00:14:35.455 05:03:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:35.455 05:03:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:35.455 05:03:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:35.455 05:03:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:35.455 05:03:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:35.455 05:03:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:35.455 05:03:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.455 05:03:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.715 05:03:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.715 05:03:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:35.715 "name": "raid_bdev1", 00:14:35.715 "uuid": "d40d3313-d521-483c-84bd-b10dbe025306", 00:14:35.715 "strip_size_kb": 64, 00:14:35.715 "state": "online", 00:14:35.715 "raid_level": "raid5f", 00:14:35.715 "superblock": false, 00:14:35.715 "num_base_bdevs": 4, 00:14:35.715 "num_base_bdevs_discovered": 4, 00:14:35.715 "num_base_bdevs_operational": 4, 00:14:35.715 "process": { 00:14:35.715 "type": "rebuild", 00:14:35.715 "target": "spare", 00:14:35.715 "progress": { 00:14:35.715 "blocks": 195840, 00:14:35.715 "percent": 99 00:14:35.715 } 00:14:35.715 }, 00:14:35.715 "base_bdevs_list": [ 00:14:35.715 { 00:14:35.715 "name": "spare", 00:14:35.715 "uuid": "57dff2bf-23a1-5bed-94f6-8a70a21a2949", 00:14:35.715 "is_configured": true, 00:14:35.715 "data_offset": 0, 00:14:35.715 "data_size": 65536 00:14:35.715 }, 00:14:35.715 { 00:14:35.715 "name": "BaseBdev2", 00:14:35.715 "uuid": 
"1023130c-73fd-599e-8160-d4e04518809b", 00:14:35.715 "is_configured": true, 00:14:35.715 "data_offset": 0, 00:14:35.715 "data_size": 65536 00:14:35.715 }, 00:14:35.715 { 00:14:35.715 "name": "BaseBdev3", 00:14:35.715 "uuid": "9048c45c-ac7f-5844-90bf-4c7002bb8e3e", 00:14:35.715 "is_configured": true, 00:14:35.715 "data_offset": 0, 00:14:35.715 "data_size": 65536 00:14:35.715 }, 00:14:35.715 { 00:14:35.715 "name": "BaseBdev4", 00:14:35.715 "uuid": "47d7cb78-a685-5522-8383-3c448e5fa5ed", 00:14:35.715 "is_configured": true, 00:14:35.715 "data_offset": 0, 00:14:35.715 "data_size": 65536 00:14:35.715 } 00:14:35.715 ] 00:14:35.715 }' 00:14:35.715 05:03:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:35.715 [2024-12-14 05:03:46.383825] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:35.715 [2024-12-14 05:03:46.383939] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:35.715 [2024-12-14 05:03:46.384001] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:35.715 05:03:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:35.715 05:03:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:35.715 05:03:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:35.715 05:03:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:36.653 05:03:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:36.653 05:03:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:36.653 05:03:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:36.653 05:03:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local 
process_type=rebuild 00:14:36.653 05:03:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:36.653 05:03:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:36.653 05:03:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:36.653 05:03:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:36.653 05:03:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.653 05:03:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.653 05:03:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.653 05:03:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:36.653 "name": "raid_bdev1", 00:14:36.653 "uuid": "d40d3313-d521-483c-84bd-b10dbe025306", 00:14:36.653 "strip_size_kb": 64, 00:14:36.653 "state": "online", 00:14:36.653 "raid_level": "raid5f", 00:14:36.653 "superblock": false, 00:14:36.653 "num_base_bdevs": 4, 00:14:36.653 "num_base_bdevs_discovered": 4, 00:14:36.653 "num_base_bdevs_operational": 4, 00:14:36.653 "base_bdevs_list": [ 00:14:36.653 { 00:14:36.653 "name": "spare", 00:14:36.653 "uuid": "57dff2bf-23a1-5bed-94f6-8a70a21a2949", 00:14:36.653 "is_configured": true, 00:14:36.653 "data_offset": 0, 00:14:36.653 "data_size": 65536 00:14:36.653 }, 00:14:36.653 { 00:14:36.653 "name": "BaseBdev2", 00:14:36.653 "uuid": "1023130c-73fd-599e-8160-d4e04518809b", 00:14:36.653 "is_configured": true, 00:14:36.653 "data_offset": 0, 00:14:36.653 "data_size": 65536 00:14:36.653 }, 00:14:36.653 { 00:14:36.653 "name": "BaseBdev3", 00:14:36.653 "uuid": "9048c45c-ac7f-5844-90bf-4c7002bb8e3e", 00:14:36.653 "is_configured": true, 00:14:36.653 "data_offset": 0, 00:14:36.653 "data_size": 65536 00:14:36.653 }, 00:14:36.653 { 00:14:36.653 "name": "BaseBdev4", 00:14:36.653 
"uuid": "47d7cb78-a685-5522-8383-3c448e5fa5ed", 00:14:36.653 "is_configured": true, 00:14:36.653 "data_offset": 0, 00:14:36.653 "data_size": 65536 00:14:36.653 } 00:14:36.653 ] 00:14:36.653 }' 00:14:36.653 05:03:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:36.912 05:03:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:36.912 05:03:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:36.912 05:03:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:36.912 05:03:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:14:36.912 05:03:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:36.912 05:03:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:36.912 05:03:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:36.912 05:03:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:36.912 05:03:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:36.912 05:03:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:36.912 05:03:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:36.912 05:03:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.912 05:03:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.912 05:03:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.912 05:03:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:36.912 "name": "raid_bdev1", 00:14:36.912 "uuid": 
"d40d3313-d521-483c-84bd-b10dbe025306", 00:14:36.912 "strip_size_kb": 64, 00:14:36.912 "state": "online", 00:14:36.912 "raid_level": "raid5f", 00:14:36.912 "superblock": false, 00:14:36.912 "num_base_bdevs": 4, 00:14:36.912 "num_base_bdevs_discovered": 4, 00:14:36.912 "num_base_bdevs_operational": 4, 00:14:36.912 "base_bdevs_list": [ 00:14:36.912 { 00:14:36.912 "name": "spare", 00:14:36.912 "uuid": "57dff2bf-23a1-5bed-94f6-8a70a21a2949", 00:14:36.912 "is_configured": true, 00:14:36.912 "data_offset": 0, 00:14:36.912 "data_size": 65536 00:14:36.912 }, 00:14:36.912 { 00:14:36.912 "name": "BaseBdev2", 00:14:36.912 "uuid": "1023130c-73fd-599e-8160-d4e04518809b", 00:14:36.912 "is_configured": true, 00:14:36.912 "data_offset": 0, 00:14:36.912 "data_size": 65536 00:14:36.912 }, 00:14:36.912 { 00:14:36.912 "name": "BaseBdev3", 00:14:36.912 "uuid": "9048c45c-ac7f-5844-90bf-4c7002bb8e3e", 00:14:36.912 "is_configured": true, 00:14:36.912 "data_offset": 0, 00:14:36.912 "data_size": 65536 00:14:36.912 }, 00:14:36.912 { 00:14:36.912 "name": "BaseBdev4", 00:14:36.912 "uuid": "47d7cb78-a685-5522-8383-3c448e5fa5ed", 00:14:36.912 "is_configured": true, 00:14:36.912 "data_offset": 0, 00:14:36.912 "data_size": 65536 00:14:36.912 } 00:14:36.912 ] 00:14:36.912 }' 00:14:36.912 05:03:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:36.912 05:03:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:36.912 05:03:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:36.912 05:03:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:36.912 05:03:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:14:36.912 05:03:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:36.912 05:03:47 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:36.913 05:03:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:36.913 05:03:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:36.913 05:03:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:36.913 05:03:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:36.913 05:03:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:36.913 05:03:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:36.913 05:03:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:36.913 05:03:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:36.913 05:03:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:36.913 05:03:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.913 05:03:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.913 05:03:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.913 05:03:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:36.913 "name": "raid_bdev1", 00:14:36.913 "uuid": "d40d3313-d521-483c-84bd-b10dbe025306", 00:14:36.913 "strip_size_kb": 64, 00:14:36.913 "state": "online", 00:14:36.913 "raid_level": "raid5f", 00:14:36.913 "superblock": false, 00:14:36.913 "num_base_bdevs": 4, 00:14:36.913 "num_base_bdevs_discovered": 4, 00:14:36.913 "num_base_bdevs_operational": 4, 00:14:36.913 "base_bdevs_list": [ 00:14:36.913 { 00:14:36.913 "name": "spare", 00:14:36.913 "uuid": "57dff2bf-23a1-5bed-94f6-8a70a21a2949", 00:14:36.913 "is_configured": 
true, 00:14:36.913 "data_offset": 0, 00:14:36.913 "data_size": 65536 00:14:36.913 }, 00:14:36.913 { 00:14:36.913 "name": "BaseBdev2", 00:14:36.913 "uuid": "1023130c-73fd-599e-8160-d4e04518809b", 00:14:36.913 "is_configured": true, 00:14:36.913 "data_offset": 0, 00:14:36.913 "data_size": 65536 00:14:36.913 }, 00:14:36.913 { 00:14:36.913 "name": "BaseBdev3", 00:14:36.913 "uuid": "9048c45c-ac7f-5844-90bf-4c7002bb8e3e", 00:14:36.913 "is_configured": true, 00:14:36.913 "data_offset": 0, 00:14:36.913 "data_size": 65536 00:14:36.913 }, 00:14:36.913 { 00:14:36.913 "name": "BaseBdev4", 00:14:36.913 "uuid": "47d7cb78-a685-5522-8383-3c448e5fa5ed", 00:14:36.913 "is_configured": true, 00:14:36.913 "data_offset": 0, 00:14:36.913 "data_size": 65536 00:14:36.913 } 00:14:36.913 ] 00:14:36.913 }' 00:14:36.913 05:03:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:36.913 05:03:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.481 05:03:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:37.481 05:03:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.482 05:03:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.482 [2024-12-14 05:03:48.150039] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:37.482 [2024-12-14 05:03:48.150117] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:37.482 [2024-12-14 05:03:48.150232] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:37.482 [2024-12-14 05:03:48.150333] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:37.482 [2024-12-14 05:03:48.150384] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:14:37.482 05:03:48 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.482 05:03:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:37.482 05:03:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:14:37.482 05:03:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.482 05:03:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.482 05:03:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.482 05:03:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:37.482 05:03:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:37.482 05:03:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:14:37.482 05:03:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:14:37.482 05:03:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:37.482 05:03:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:14:37.482 05:03:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:37.482 05:03:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:37.482 05:03:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:37.482 05:03:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:14:37.482 05:03:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:37.482 05:03:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:37.482 05:03:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:14:37.741 /dev/nbd0 00:14:37.741 05:03:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:37.741 05:03:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:37.741 05:03:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:14:37.741 05:03:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:14:37.741 05:03:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:37.741 05:03:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:37.741 05:03:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:14:37.741 05:03:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:14:37.741 05:03:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:37.741 05:03:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:37.741 05:03:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:37.741 1+0 records in 00:14:37.741 1+0 records out 00:14:37.741 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000573587 s, 7.1 MB/s 00:14:37.741 05:03:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:37.741 05:03:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:14:37.741 05:03:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:37.741 05:03:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:37.741 05:03:48 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@889 -- # return 0 00:14:37.741 05:03:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:37.741 05:03:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:37.741 05:03:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:14:38.001 /dev/nbd1 00:14:38.001 05:03:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:38.001 05:03:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:38.001 05:03:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:14:38.001 05:03:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:14:38.001 05:03:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:38.001 05:03:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:38.001 05:03:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:14:38.001 05:03:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:14:38.001 05:03:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:38.001 05:03:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:38.001 05:03:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:38.001 1+0 records in 00:14:38.001 1+0 records out 00:14:38.001 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000605089 s, 6.8 MB/s 00:14:38.001 05:03:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:38.001 05:03:48 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@886 -- # size=4096 00:14:38.001 05:03:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:38.001 05:03:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:38.001 05:03:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:14:38.001 05:03:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:38.001 05:03:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:38.001 05:03:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:14:38.001 05:03:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:14:38.001 05:03:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:38.001 05:03:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:38.001 05:03:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:38.001 05:03:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:14:38.001 05:03:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:38.001 05:03:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:38.261 05:03:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:38.261 05:03:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:38.261 05:03:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:38.261 05:03:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:38.261 05:03:49 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:38.261 05:03:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:38.261 05:03:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:14:38.261 05:03:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:14:38.261 05:03:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:38.261 05:03:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:38.520 05:03:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:38.521 05:03:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:38.521 05:03:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:38.521 05:03:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:38.521 05:03:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:38.521 05:03:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:38.521 05:03:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:14:38.521 05:03:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:14:38.521 05:03:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:14:38.521 05:03:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 95031 00:14:38.521 05:03:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@950 -- # '[' -z 95031 ']' 00:14:38.521 05:03:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # kill -0 95031 00:14:38.521 05:03:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@955 -- # uname 00:14:38.521 05:03:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@955 
-- # '[' Linux = Linux ']' 00:14:38.521 05:03:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 95031 00:14:38.521 05:03:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:38.521 05:03:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:38.521 05:03:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 95031' 00:14:38.521 killing process with pid 95031 00:14:38.521 05:03:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@969 -- # kill 95031 00:14:38.521 Received shutdown signal, test time was about 60.000000 seconds 00:14:38.521 00:14:38.521 Latency(us) 00:14:38.521 [2024-12-14T05:03:49.404Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:38.521 [2024-12-14T05:03:49.404Z] =================================================================================================================== 00:14:38.521 [2024-12-14T05:03:49.404Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:38.521 [2024-12-14 05:03:49.262365] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:38.521 05:03:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@974 -- # wait 95031 00:14:38.521 [2024-12-14 05:03:49.312231] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:38.781 ************************************ 00:14:38.781 END TEST raid5f_rebuild_test 00:14:38.781 ************************************ 00:14:38.781 05:03:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:14:38.781 00:14:38.781 real 0m18.343s 00:14:38.781 user 0m22.081s 00:14:38.781 sys 0m2.345s 00:14:38.781 05:03:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:38.781 05:03:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.781 05:03:49 
bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 4 true false true 00:14:38.781 05:03:49 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:14:38.781 05:03:49 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:38.781 05:03:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:38.781 ************************************ 00:14:38.781 START TEST raid5f_rebuild_test_sb 00:14:38.781 ************************************ 00:14:38.781 05:03:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid5f 4 true false true 00:14:38.781 05:03:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:14:38.781 05:03:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:14:38.782 05:03:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:14:38.782 05:03:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:14:38.782 05:03:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:38.782 05:03:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:38.782 05:03:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:38.782 05:03:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:38.782 05:03:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:38.782 05:03:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:38.782 05:03:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:38.782 05:03:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:38.782 05:03:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= 
num_base_bdevs )) 00:14:38.782 05:03:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:14:38.782 05:03:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:38.782 05:03:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:38.782 05:03:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:14:38.782 05:03:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:38.782 05:03:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:38.782 05:03:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:38.782 05:03:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:38.782 05:03:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:38.782 05:03:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:38.782 05:03:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:38.782 05:03:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:38.782 05:03:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:38.782 05:03:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:14:38.782 05:03:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:14:38.782 05:03:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:14:38.782 05:03:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:14:38.782 05:03:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:14:38.782 05:03:49 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:14:38.782 05:03:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=95540 00:14:38.782 05:03:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 95540 00:14:38.782 05:03:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:38.782 05:03:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@831 -- # '[' -z 95540 ']' 00:14:38.782 05:03:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:38.782 05:03:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:38.782 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:38.782 05:03:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:38.782 05:03:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:38.782 05:03:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:39.041 [2024-12-14 05:03:49.726504] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:14:39.041 [2024-12-14 05:03:49.726715] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95540 ] 00:14:39.041 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:39.041 Zero copy mechanism will not be used. 
00:14:39.041 [2024-12-14 05:03:49.887205] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:39.301 [2024-12-14 05:03:49.935080] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:39.301 [2024-12-14 05:03:49.977913] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:39.301 [2024-12-14 05:03:49.978026] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:39.870 05:03:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:39.870 05:03:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # return 0 00:14:39.870 05:03:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:39.870 05:03:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:39.870 05:03:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.870 05:03:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:39.870 BaseBdev1_malloc 00:14:39.870 05:03:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.870 05:03:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:39.870 05:03:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.870 05:03:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:39.870 [2024-12-14 05:03:50.556625] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:39.870 [2024-12-14 05:03:50.556788] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:39.870 [2024-12-14 05:03:50.556845] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:39.870 
[2024-12-14 05:03:50.556883] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:39.870 [2024-12-14 05:03:50.558904] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:39.870 [2024-12-14 05:03:50.558982] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:39.870 BaseBdev1 00:14:39.870 05:03:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.870 05:03:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:39.870 05:03:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:39.870 05:03:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.870 05:03:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:39.870 BaseBdev2_malloc 00:14:39.870 05:03:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.870 05:03:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:39.870 05:03:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.870 05:03:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:39.870 [2024-12-14 05:03:50.600088] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:39.870 [2024-12-14 05:03:50.600299] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:39.870 [2024-12-14 05:03:50.600387] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:39.870 [2024-12-14 05:03:50.600461] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:39.870 [2024-12-14 05:03:50.604939] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:39.870 [2024-12-14 05:03:50.605079] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:39.870 BaseBdev2 00:14:39.870 05:03:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.870 05:03:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:39.870 05:03:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:39.870 05:03:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.870 05:03:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:39.870 BaseBdev3_malloc 00:14:39.870 05:03:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.870 05:03:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:14:39.870 05:03:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.870 05:03:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:39.870 [2024-12-14 05:03:50.631221] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:14:39.870 [2024-12-14 05:03:50.631344] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:39.870 [2024-12-14 05:03:50.631373] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:39.870 [2024-12-14 05:03:50.631382] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:39.870 [2024-12-14 05:03:50.633402] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:39.870 [2024-12-14 05:03:50.633436] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:14:39.870 BaseBdev3 00:14:39.870 05:03:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.870 05:03:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:39.870 05:03:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:14:39.870 05:03:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.870 05:03:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:39.870 BaseBdev4_malloc 00:14:39.870 05:03:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.870 05:03:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:14:39.870 05:03:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.870 05:03:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:39.870 [2024-12-14 05:03:50.660023] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:14:39.870 [2024-12-14 05:03:50.660136] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:39.870 [2024-12-14 05:03:50.660191] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:14:39.870 [2024-12-14 05:03:50.660221] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:39.870 [2024-12-14 05:03:50.662213] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:39.870 [2024-12-14 05:03:50.662278] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:14:39.870 BaseBdev4 00:14:39.870 05:03:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.870 05:03:50 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:39.870 05:03:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.870 05:03:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:39.870 spare_malloc 00:14:39.870 05:03:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.870 05:03:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:39.870 05:03:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.870 05:03:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:39.870 spare_delay 00:14:39.870 05:03:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.870 05:03:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:39.870 05:03:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.870 05:03:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:39.870 [2024-12-14 05:03:50.700611] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:39.870 [2024-12-14 05:03:50.700719] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:39.870 [2024-12-14 05:03:50.700743] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:14:39.870 [2024-12-14 05:03:50.700751] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:39.871 [2024-12-14 05:03:50.702644] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:39.871 [2024-12-14 05:03:50.702679] vbdev_passthru.c: 710:vbdev_passthru_register: 
*NOTICE*: created pt_bdev for: spare 00:14:39.871 spare 00:14:39.871 05:03:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.871 05:03:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:14:39.871 05:03:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.871 05:03:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:39.871 [2024-12-14 05:03:50.712682] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:39.871 [2024-12-14 05:03:50.714359] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:39.871 [2024-12-14 05:03:50.714460] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:39.871 [2024-12-14 05:03:50.714513] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:39.871 [2024-12-14 05:03:50.714700] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:14:39.871 [2024-12-14 05:03:50.714743] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:14:39.871 [2024-12-14 05:03:50.714975] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:14:39.871 [2024-12-14 05:03:50.715462] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:14:39.871 [2024-12-14 05:03:50.715513] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:14:39.871 [2024-12-14 05:03:50.715668] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:39.871 05:03:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.871 05:03:50 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:14:39.871 05:03:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:39.871 05:03:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:39.871 05:03:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:39.871 05:03:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:39.871 05:03:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:39.871 05:03:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:39.871 05:03:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:39.871 05:03:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:39.871 05:03:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:39.871 05:03:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:39.871 05:03:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.871 05:03:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:39.871 05:03:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:39.871 05:03:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.130 05:03:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:40.130 "name": "raid_bdev1", 00:14:40.130 "uuid": "1f71498e-7b1a-46f6-ad6a-b6eb557a2406", 00:14:40.130 "strip_size_kb": 64, 00:14:40.130 "state": "online", 00:14:40.130 "raid_level": "raid5f", 00:14:40.130 "superblock": true, 
00:14:40.130 "num_base_bdevs": 4, 00:14:40.130 "num_base_bdevs_discovered": 4, 00:14:40.130 "num_base_bdevs_operational": 4, 00:14:40.130 "base_bdevs_list": [ 00:14:40.130 { 00:14:40.130 "name": "BaseBdev1", 00:14:40.130 "uuid": "95714a49-f39c-59eb-9e58-45426a052879", 00:14:40.130 "is_configured": true, 00:14:40.130 "data_offset": 2048, 00:14:40.130 "data_size": 63488 00:14:40.130 }, 00:14:40.130 { 00:14:40.130 "name": "BaseBdev2", 00:14:40.130 "uuid": "b74f4d5f-bb8e-5fed-9380-2160a1e607fd", 00:14:40.130 "is_configured": true, 00:14:40.130 "data_offset": 2048, 00:14:40.130 "data_size": 63488 00:14:40.130 }, 00:14:40.130 { 00:14:40.130 "name": "BaseBdev3", 00:14:40.130 "uuid": "d1d5fa7f-c375-5f0c-8f2c-bc91af13fcb9", 00:14:40.130 "is_configured": true, 00:14:40.130 "data_offset": 2048, 00:14:40.130 "data_size": 63488 00:14:40.130 }, 00:14:40.130 { 00:14:40.130 "name": "BaseBdev4", 00:14:40.130 "uuid": "28d210fc-6d96-52f9-aad9-d943379a0278", 00:14:40.130 "is_configured": true, 00:14:40.130 "data_offset": 2048, 00:14:40.130 "data_size": 63488 00:14:40.130 } 00:14:40.130 ] 00:14:40.130 }' 00:14:40.130 05:03:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:40.130 05:03:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:40.389 05:03:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:40.389 05:03:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:40.389 05:03:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.389 05:03:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:40.389 [2024-12-14 05:03:51.172763] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:40.389 05:03:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.389 05:03:51 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=190464 00:14:40.389 05:03:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:40.389 05:03:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.389 05:03:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:40.389 05:03:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:40.389 05:03:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.389 05:03:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:14:40.389 05:03:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:14:40.389 05:03:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:14:40.389 05:03:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:14:40.389 05:03:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:14:40.389 05:03:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:40.389 05:03:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:14:40.389 05:03:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:40.389 05:03:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:40.389 05:03:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:40.389 05:03:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:14:40.389 05:03:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:40.389 05:03:51 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:40.389 05:03:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:14:40.647 [2024-12-14 05:03:51.444220] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:40.647 /dev/nbd0 00:14:40.647 05:03:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:40.647 05:03:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:40.647 05:03:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:14:40.647 05:03:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:14:40.647 05:03:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:40.647 05:03:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:40.647 05:03:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:14:40.647 05:03:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:14:40.647 05:03:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:40.647 05:03:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:40.647 05:03:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:40.647 1+0 records in 00:14:40.647 1+0 records out 00:14:40.647 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000262926 s, 15.6 MB/s 00:14:40.647 05:03:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:40.647 05:03:51 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@886 -- # size=4096 00:14:40.647 05:03:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:40.647 05:03:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:40.647 05:03:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:14:40.647 05:03:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:40.647 05:03:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:40.647 05:03:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:14:40.647 05:03:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:14:40.647 05:03:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 192 00:14:40.648 05:03:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=496 oflag=direct 00:14:41.216 496+0 records in 00:14:41.216 496+0 records out 00:14:41.216 97517568 bytes (98 MB, 93 MiB) copied, 0.530635 s, 184 MB/s 00:14:41.216 05:03:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:41.216 05:03:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:41.216 05:03:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:41.216 05:03:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:41.216 05:03:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:14:41.216 05:03:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:41.216 05:03:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:41.476 05:03:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:41.476 05:03:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:41.476 05:03:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:41.476 [2024-12-14 05:03:52.265447] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:41.476 05:03:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:41.476 05:03:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:41.476 05:03:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:41.476 05:03:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:41.476 05:03:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:41.476 05:03:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:41.476 05:03:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.476 05:03:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:41.476 [2024-12-14 05:03:52.277493] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:41.476 05:03:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.476 05:03:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:41.476 05:03:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:41.476 05:03:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:41.476 05:03:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 
00:14:41.476 05:03:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:41.476 05:03:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:41.476 05:03:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:41.476 05:03:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:41.476 05:03:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:41.476 05:03:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:41.476 05:03:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:41.477 05:03:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:41.477 05:03:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.477 05:03:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:41.477 05:03:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.477 05:03:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:41.477 "name": "raid_bdev1", 00:14:41.477 "uuid": "1f71498e-7b1a-46f6-ad6a-b6eb557a2406", 00:14:41.477 "strip_size_kb": 64, 00:14:41.477 "state": "online", 00:14:41.477 "raid_level": "raid5f", 00:14:41.477 "superblock": true, 00:14:41.477 "num_base_bdevs": 4, 00:14:41.477 "num_base_bdevs_discovered": 3, 00:14:41.477 "num_base_bdevs_operational": 3, 00:14:41.477 "base_bdevs_list": [ 00:14:41.477 { 00:14:41.477 "name": null, 00:14:41.477 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:41.477 "is_configured": false, 00:14:41.477 "data_offset": 0, 00:14:41.477 "data_size": 63488 00:14:41.477 }, 00:14:41.477 { 00:14:41.477 "name": "BaseBdev2", 00:14:41.477 "uuid": 
"b74f4d5f-bb8e-5fed-9380-2160a1e607fd", 00:14:41.477 "is_configured": true, 00:14:41.477 "data_offset": 2048, 00:14:41.477 "data_size": 63488 00:14:41.477 }, 00:14:41.477 { 00:14:41.477 "name": "BaseBdev3", 00:14:41.477 "uuid": "d1d5fa7f-c375-5f0c-8f2c-bc91af13fcb9", 00:14:41.477 "is_configured": true, 00:14:41.477 "data_offset": 2048, 00:14:41.477 "data_size": 63488 00:14:41.477 }, 00:14:41.477 { 00:14:41.477 "name": "BaseBdev4", 00:14:41.477 "uuid": "28d210fc-6d96-52f9-aad9-d943379a0278", 00:14:41.477 "is_configured": true, 00:14:41.477 "data_offset": 2048, 00:14:41.477 "data_size": 63488 00:14:41.477 } 00:14:41.477 ] 00:14:41.477 }' 00:14:41.477 05:03:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:41.477 05:03:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:42.057 05:03:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:42.057 05:03:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.057 05:03:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:42.057 [2024-12-14 05:03:52.756700] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:42.057 [2024-12-14 05:03:52.760091] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002a8b0 00:14:42.057 [2024-12-14 05:03:52.762222] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:42.057 05:03:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.057 05:03:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:43.040 05:03:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:43.040 05:03:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:14:43.040 05:03:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:43.040 05:03:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:43.040 05:03:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:43.040 05:03:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:43.040 05:03:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.040 05:03:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:43.040 05:03:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:43.040 05:03:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.040 05:03:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:43.040 "name": "raid_bdev1", 00:14:43.040 "uuid": "1f71498e-7b1a-46f6-ad6a-b6eb557a2406", 00:14:43.040 "strip_size_kb": 64, 00:14:43.040 "state": "online", 00:14:43.040 "raid_level": "raid5f", 00:14:43.040 "superblock": true, 00:14:43.040 "num_base_bdevs": 4, 00:14:43.040 "num_base_bdevs_discovered": 4, 00:14:43.040 "num_base_bdevs_operational": 4, 00:14:43.040 "process": { 00:14:43.040 "type": "rebuild", 00:14:43.040 "target": "spare", 00:14:43.040 "progress": { 00:14:43.040 "blocks": 19200, 00:14:43.040 "percent": 10 00:14:43.040 } 00:14:43.040 }, 00:14:43.040 "base_bdevs_list": [ 00:14:43.040 { 00:14:43.040 "name": "spare", 00:14:43.040 "uuid": "bc267183-b338-502a-b407-f3c4e79e07d4", 00:14:43.040 "is_configured": true, 00:14:43.040 "data_offset": 2048, 00:14:43.040 "data_size": 63488 00:14:43.040 }, 00:14:43.040 { 00:14:43.040 "name": "BaseBdev2", 00:14:43.040 "uuid": "b74f4d5f-bb8e-5fed-9380-2160a1e607fd", 00:14:43.040 "is_configured": true, 00:14:43.040 
"data_offset": 2048, 00:14:43.040 "data_size": 63488 00:14:43.040 }, 00:14:43.040 { 00:14:43.040 "name": "BaseBdev3", 00:14:43.040 "uuid": "d1d5fa7f-c375-5f0c-8f2c-bc91af13fcb9", 00:14:43.040 "is_configured": true, 00:14:43.040 "data_offset": 2048, 00:14:43.040 "data_size": 63488 00:14:43.040 }, 00:14:43.040 { 00:14:43.040 "name": "BaseBdev4", 00:14:43.040 "uuid": "28d210fc-6d96-52f9-aad9-d943379a0278", 00:14:43.040 "is_configured": true, 00:14:43.040 "data_offset": 2048, 00:14:43.040 "data_size": 63488 00:14:43.040 } 00:14:43.040 ] 00:14:43.040 }' 00:14:43.040 05:03:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:43.040 05:03:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:43.040 05:03:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:43.040 05:03:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:43.040 05:03:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:43.040 05:03:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.040 05:03:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:43.040 [2024-12-14 05:03:53.904714] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:43.299 [2024-12-14 05:03:53.967463] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:43.299 [2024-12-14 05:03:53.967557] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:43.299 [2024-12-14 05:03:53.967578] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:43.299 [2024-12-14 05:03:53.967588] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:43.299 
05:03:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.299 05:03:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:43.299 05:03:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:43.299 05:03:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:43.299 05:03:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:43.299 05:03:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:43.299 05:03:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:43.299 05:03:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:43.299 05:03:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:43.299 05:03:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:43.299 05:03:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:43.299 05:03:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:43.299 05:03:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:43.299 05:03:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.299 05:03:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:43.300 05:03:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.300 05:03:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:43.300 "name": "raid_bdev1", 00:14:43.300 "uuid": "1f71498e-7b1a-46f6-ad6a-b6eb557a2406", 00:14:43.300 
"strip_size_kb": 64, 00:14:43.300 "state": "online", 00:14:43.300 "raid_level": "raid5f", 00:14:43.300 "superblock": true, 00:14:43.300 "num_base_bdevs": 4, 00:14:43.300 "num_base_bdevs_discovered": 3, 00:14:43.300 "num_base_bdevs_operational": 3, 00:14:43.300 "base_bdevs_list": [ 00:14:43.300 { 00:14:43.300 "name": null, 00:14:43.300 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:43.300 "is_configured": false, 00:14:43.300 "data_offset": 0, 00:14:43.300 "data_size": 63488 00:14:43.300 }, 00:14:43.300 { 00:14:43.300 "name": "BaseBdev2", 00:14:43.300 "uuid": "b74f4d5f-bb8e-5fed-9380-2160a1e607fd", 00:14:43.300 "is_configured": true, 00:14:43.300 "data_offset": 2048, 00:14:43.300 "data_size": 63488 00:14:43.300 }, 00:14:43.300 { 00:14:43.300 "name": "BaseBdev3", 00:14:43.300 "uuid": "d1d5fa7f-c375-5f0c-8f2c-bc91af13fcb9", 00:14:43.300 "is_configured": true, 00:14:43.300 "data_offset": 2048, 00:14:43.300 "data_size": 63488 00:14:43.300 }, 00:14:43.300 { 00:14:43.300 "name": "BaseBdev4", 00:14:43.300 "uuid": "28d210fc-6d96-52f9-aad9-d943379a0278", 00:14:43.300 "is_configured": true, 00:14:43.300 "data_offset": 2048, 00:14:43.300 "data_size": 63488 00:14:43.300 } 00:14:43.300 ] 00:14:43.300 }' 00:14:43.300 05:03:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:43.300 05:03:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:43.559 05:03:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:43.559 05:03:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:43.559 05:03:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:43.559 05:03:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:43.559 05:03:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:43.559 
05:03:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:43.559 05:03:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.559 05:03:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:43.559 05:03:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:43.559 05:03:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.818 05:03:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:43.818 "name": "raid_bdev1", 00:14:43.818 "uuid": "1f71498e-7b1a-46f6-ad6a-b6eb557a2406", 00:14:43.818 "strip_size_kb": 64, 00:14:43.818 "state": "online", 00:14:43.818 "raid_level": "raid5f", 00:14:43.818 "superblock": true, 00:14:43.818 "num_base_bdevs": 4, 00:14:43.818 "num_base_bdevs_discovered": 3, 00:14:43.818 "num_base_bdevs_operational": 3, 00:14:43.818 "base_bdevs_list": [ 00:14:43.818 { 00:14:43.818 "name": null, 00:14:43.818 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:43.818 "is_configured": false, 00:14:43.818 "data_offset": 0, 00:14:43.818 "data_size": 63488 00:14:43.818 }, 00:14:43.818 { 00:14:43.818 "name": "BaseBdev2", 00:14:43.818 "uuid": "b74f4d5f-bb8e-5fed-9380-2160a1e607fd", 00:14:43.818 "is_configured": true, 00:14:43.818 "data_offset": 2048, 00:14:43.818 "data_size": 63488 00:14:43.818 }, 00:14:43.818 { 00:14:43.818 "name": "BaseBdev3", 00:14:43.818 "uuid": "d1d5fa7f-c375-5f0c-8f2c-bc91af13fcb9", 00:14:43.818 "is_configured": true, 00:14:43.818 "data_offset": 2048, 00:14:43.818 "data_size": 63488 00:14:43.818 }, 00:14:43.818 { 00:14:43.818 "name": "BaseBdev4", 00:14:43.818 "uuid": "28d210fc-6d96-52f9-aad9-d943379a0278", 00:14:43.818 "is_configured": true, 00:14:43.818 "data_offset": 2048, 00:14:43.818 "data_size": 63488 00:14:43.818 } 00:14:43.818 ] 00:14:43.818 }' 00:14:43.818 05:03:54 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:43.818 05:03:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:43.818 05:03:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:43.819 05:03:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:43.819 05:03:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:43.819 05:03:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.819 05:03:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:43.819 [2024-12-14 05:03:54.563812] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:43.819 [2024-12-14 05:03:54.566565] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002a980 00:14:43.819 [2024-12-14 05:03:54.568759] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:43.819 05:03:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.819 05:03:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:44.764 05:03:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:44.764 05:03:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:44.764 05:03:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:44.764 05:03:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:44.764 05:03:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:44.764 05:03:55 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:44.764 05:03:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:44.764 05:03:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.764 05:03:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:44.764 05:03:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.764 05:03:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:44.764 "name": "raid_bdev1", 00:14:44.764 "uuid": "1f71498e-7b1a-46f6-ad6a-b6eb557a2406", 00:14:44.764 "strip_size_kb": 64, 00:14:44.764 "state": "online", 00:14:44.764 "raid_level": "raid5f", 00:14:44.764 "superblock": true, 00:14:44.764 "num_base_bdevs": 4, 00:14:44.764 "num_base_bdevs_discovered": 4, 00:14:44.764 "num_base_bdevs_operational": 4, 00:14:44.764 "process": { 00:14:44.764 "type": "rebuild", 00:14:44.764 "target": "spare", 00:14:44.764 "progress": { 00:14:44.764 "blocks": 19200, 00:14:44.764 "percent": 10 00:14:44.764 } 00:14:44.764 }, 00:14:44.764 "base_bdevs_list": [ 00:14:44.764 { 00:14:44.764 "name": "spare", 00:14:44.764 "uuid": "bc267183-b338-502a-b407-f3c4e79e07d4", 00:14:44.764 "is_configured": true, 00:14:44.764 "data_offset": 2048, 00:14:44.764 "data_size": 63488 00:14:44.764 }, 00:14:44.764 { 00:14:44.764 "name": "BaseBdev2", 00:14:44.764 "uuid": "b74f4d5f-bb8e-5fed-9380-2160a1e607fd", 00:14:44.764 "is_configured": true, 00:14:44.764 "data_offset": 2048, 00:14:44.764 "data_size": 63488 00:14:44.764 }, 00:14:44.764 { 00:14:44.764 "name": "BaseBdev3", 00:14:44.764 "uuid": "d1d5fa7f-c375-5f0c-8f2c-bc91af13fcb9", 00:14:44.764 "is_configured": true, 00:14:44.764 "data_offset": 2048, 00:14:44.764 "data_size": 63488 00:14:44.764 }, 00:14:44.764 { 00:14:44.764 "name": "BaseBdev4", 00:14:44.764 "uuid": "28d210fc-6d96-52f9-aad9-d943379a0278", 
00:14:44.764 "is_configured": true, 00:14:44.764 "data_offset": 2048, 00:14:44.764 "data_size": 63488 00:14:44.764 } 00:14:44.764 ] 00:14:44.764 }' 00:14:44.764 05:03:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:45.024 05:03:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:45.024 05:03:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:45.024 05:03:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:45.024 05:03:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:14:45.024 05:03:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:14:45.024 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:14:45.024 05:03:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:14:45.024 05:03:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:14:45.024 05:03:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=526 00:14:45.024 05:03:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:45.024 05:03:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:45.024 05:03:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:45.024 05:03:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:45.024 05:03:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:45.024 05:03:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:45.024 05:03:55 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:45.024 05:03:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.024 05:03:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:45.024 05:03:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:45.024 05:03:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.024 05:03:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:45.024 "name": "raid_bdev1", 00:14:45.024 "uuid": "1f71498e-7b1a-46f6-ad6a-b6eb557a2406", 00:14:45.024 "strip_size_kb": 64, 00:14:45.024 "state": "online", 00:14:45.024 "raid_level": "raid5f", 00:14:45.024 "superblock": true, 00:14:45.024 "num_base_bdevs": 4, 00:14:45.024 "num_base_bdevs_discovered": 4, 00:14:45.024 "num_base_bdevs_operational": 4, 00:14:45.024 "process": { 00:14:45.024 "type": "rebuild", 00:14:45.024 "target": "spare", 00:14:45.024 "progress": { 00:14:45.024 "blocks": 21120, 00:14:45.024 "percent": 11 00:14:45.024 } 00:14:45.024 }, 00:14:45.024 "base_bdevs_list": [ 00:14:45.024 { 00:14:45.024 "name": "spare", 00:14:45.024 "uuid": "bc267183-b338-502a-b407-f3c4e79e07d4", 00:14:45.024 "is_configured": true, 00:14:45.024 "data_offset": 2048, 00:14:45.024 "data_size": 63488 00:14:45.024 }, 00:14:45.024 { 00:14:45.024 "name": "BaseBdev2", 00:14:45.024 "uuid": "b74f4d5f-bb8e-5fed-9380-2160a1e607fd", 00:14:45.024 "is_configured": true, 00:14:45.024 "data_offset": 2048, 00:14:45.024 "data_size": 63488 00:14:45.024 }, 00:14:45.024 { 00:14:45.024 "name": "BaseBdev3", 00:14:45.024 "uuid": "d1d5fa7f-c375-5f0c-8f2c-bc91af13fcb9", 00:14:45.024 "is_configured": true, 00:14:45.024 "data_offset": 2048, 00:14:45.024 "data_size": 63488 00:14:45.024 }, 00:14:45.024 { 00:14:45.024 "name": "BaseBdev4", 00:14:45.024 "uuid": "28d210fc-6d96-52f9-aad9-d943379a0278", 
00:14:45.024 "is_configured": true, 00:14:45.024 "data_offset": 2048, 00:14:45.024 "data_size": 63488 00:14:45.024 } 00:14:45.024 ] 00:14:45.024 }' 00:14:45.024 05:03:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:45.024 05:03:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:45.024 05:03:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:45.024 05:03:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:45.024 05:03:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:46.402 05:03:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:46.402 05:03:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:46.402 05:03:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:46.402 05:03:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:46.402 05:03:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:46.402 05:03:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:46.402 05:03:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:46.402 05:03:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.402 05:03:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:46.402 05:03:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:46.402 05:03:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.402 05:03:56 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:46.402 "name": "raid_bdev1", 00:14:46.402 "uuid": "1f71498e-7b1a-46f6-ad6a-b6eb557a2406", 00:14:46.402 "strip_size_kb": 64, 00:14:46.402 "state": "online", 00:14:46.402 "raid_level": "raid5f", 00:14:46.402 "superblock": true, 00:14:46.402 "num_base_bdevs": 4, 00:14:46.402 "num_base_bdevs_discovered": 4, 00:14:46.402 "num_base_bdevs_operational": 4, 00:14:46.402 "process": { 00:14:46.402 "type": "rebuild", 00:14:46.402 "target": "spare", 00:14:46.402 "progress": { 00:14:46.402 "blocks": 44160, 00:14:46.402 "percent": 23 00:14:46.402 } 00:14:46.402 }, 00:14:46.402 "base_bdevs_list": [ 00:14:46.402 { 00:14:46.402 "name": "spare", 00:14:46.402 "uuid": "bc267183-b338-502a-b407-f3c4e79e07d4", 00:14:46.403 "is_configured": true, 00:14:46.403 "data_offset": 2048, 00:14:46.403 "data_size": 63488 00:14:46.403 }, 00:14:46.403 { 00:14:46.403 "name": "BaseBdev2", 00:14:46.403 "uuid": "b74f4d5f-bb8e-5fed-9380-2160a1e607fd", 00:14:46.403 "is_configured": true, 00:14:46.403 "data_offset": 2048, 00:14:46.403 "data_size": 63488 00:14:46.403 }, 00:14:46.403 { 00:14:46.403 "name": "BaseBdev3", 00:14:46.403 "uuid": "d1d5fa7f-c375-5f0c-8f2c-bc91af13fcb9", 00:14:46.403 "is_configured": true, 00:14:46.403 "data_offset": 2048, 00:14:46.403 "data_size": 63488 00:14:46.403 }, 00:14:46.403 { 00:14:46.403 "name": "BaseBdev4", 00:14:46.403 "uuid": "28d210fc-6d96-52f9-aad9-d943379a0278", 00:14:46.403 "is_configured": true, 00:14:46.403 "data_offset": 2048, 00:14:46.403 "data_size": 63488 00:14:46.403 } 00:14:46.403 ] 00:14:46.403 }' 00:14:46.403 05:03:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:46.403 05:03:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:46.403 05:03:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:46.403 05:03:57 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:46.403 05:03:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:47.340 05:03:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:47.340 05:03:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:47.340 05:03:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:47.340 05:03:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:47.340 05:03:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:47.340 05:03:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:47.340 05:03:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:47.340 05:03:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:47.340 05:03:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.340 05:03:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:47.340 05:03:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.340 05:03:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:47.340 "name": "raid_bdev1", 00:14:47.340 "uuid": "1f71498e-7b1a-46f6-ad6a-b6eb557a2406", 00:14:47.341 "strip_size_kb": 64, 00:14:47.341 "state": "online", 00:14:47.341 "raid_level": "raid5f", 00:14:47.341 "superblock": true, 00:14:47.341 "num_base_bdevs": 4, 00:14:47.341 "num_base_bdevs_discovered": 4, 00:14:47.341 "num_base_bdevs_operational": 4, 00:14:47.341 "process": { 00:14:47.341 "type": "rebuild", 00:14:47.341 "target": "spare", 00:14:47.341 "progress": 
{ 00:14:47.341 "blocks": 65280, 00:14:47.341 "percent": 34 00:14:47.341 } 00:14:47.341 }, 00:14:47.341 "base_bdevs_list": [ 00:14:47.341 { 00:14:47.341 "name": "spare", 00:14:47.341 "uuid": "bc267183-b338-502a-b407-f3c4e79e07d4", 00:14:47.341 "is_configured": true, 00:14:47.341 "data_offset": 2048, 00:14:47.341 "data_size": 63488 00:14:47.341 }, 00:14:47.341 { 00:14:47.341 "name": "BaseBdev2", 00:14:47.341 "uuid": "b74f4d5f-bb8e-5fed-9380-2160a1e607fd", 00:14:47.341 "is_configured": true, 00:14:47.341 "data_offset": 2048, 00:14:47.341 "data_size": 63488 00:14:47.341 }, 00:14:47.341 { 00:14:47.341 "name": "BaseBdev3", 00:14:47.341 "uuid": "d1d5fa7f-c375-5f0c-8f2c-bc91af13fcb9", 00:14:47.341 "is_configured": true, 00:14:47.341 "data_offset": 2048, 00:14:47.341 "data_size": 63488 00:14:47.341 }, 00:14:47.341 { 00:14:47.341 "name": "BaseBdev4", 00:14:47.341 "uuid": "28d210fc-6d96-52f9-aad9-d943379a0278", 00:14:47.341 "is_configured": true, 00:14:47.341 "data_offset": 2048, 00:14:47.341 "data_size": 63488 00:14:47.341 } 00:14:47.341 ] 00:14:47.341 }' 00:14:47.341 05:03:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:47.341 05:03:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:47.341 05:03:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:47.341 05:03:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:47.341 05:03:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:48.278 05:03:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:48.278 05:03:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:48.278 05:03:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:48.278 
05:03:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:48.278 05:03:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:48.278 05:03:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:48.537 05:03:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:48.537 05:03:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:48.537 05:03:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.537 05:03:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:48.537 05:03:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.537 05:03:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:48.537 "name": "raid_bdev1", 00:14:48.537 "uuid": "1f71498e-7b1a-46f6-ad6a-b6eb557a2406", 00:14:48.537 "strip_size_kb": 64, 00:14:48.537 "state": "online", 00:14:48.537 "raid_level": "raid5f", 00:14:48.537 "superblock": true, 00:14:48.537 "num_base_bdevs": 4, 00:14:48.537 "num_base_bdevs_discovered": 4, 00:14:48.537 "num_base_bdevs_operational": 4, 00:14:48.537 "process": { 00:14:48.537 "type": "rebuild", 00:14:48.537 "target": "spare", 00:14:48.537 "progress": { 00:14:48.537 "blocks": 86400, 00:14:48.537 "percent": 45 00:14:48.537 } 00:14:48.537 }, 00:14:48.537 "base_bdevs_list": [ 00:14:48.537 { 00:14:48.537 "name": "spare", 00:14:48.537 "uuid": "bc267183-b338-502a-b407-f3c4e79e07d4", 00:14:48.537 "is_configured": true, 00:14:48.537 "data_offset": 2048, 00:14:48.537 "data_size": 63488 00:14:48.537 }, 00:14:48.537 { 00:14:48.537 "name": "BaseBdev2", 00:14:48.537 "uuid": "b74f4d5f-bb8e-5fed-9380-2160a1e607fd", 00:14:48.537 "is_configured": true, 00:14:48.537 "data_offset": 2048, 00:14:48.537 "data_size": 
63488 00:14:48.537 }, 00:14:48.537 { 00:14:48.537 "name": "BaseBdev3", 00:14:48.537 "uuid": "d1d5fa7f-c375-5f0c-8f2c-bc91af13fcb9", 00:14:48.538 "is_configured": true, 00:14:48.538 "data_offset": 2048, 00:14:48.538 "data_size": 63488 00:14:48.538 }, 00:14:48.538 { 00:14:48.538 "name": "BaseBdev4", 00:14:48.538 "uuid": "28d210fc-6d96-52f9-aad9-d943379a0278", 00:14:48.538 "is_configured": true, 00:14:48.538 "data_offset": 2048, 00:14:48.538 "data_size": 63488 00:14:48.538 } 00:14:48.538 ] 00:14:48.538 }' 00:14:48.538 05:03:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:48.538 05:03:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:48.538 05:03:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:48.538 05:03:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:48.538 05:03:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:49.475 05:04:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:49.475 05:04:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:49.475 05:04:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:49.475 05:04:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:49.475 05:04:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:49.475 05:04:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:49.475 05:04:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:49.475 05:04:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:14:49.475 05:04:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.475 05:04:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:49.475 05:04:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.475 05:04:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:49.475 "name": "raid_bdev1", 00:14:49.475 "uuid": "1f71498e-7b1a-46f6-ad6a-b6eb557a2406", 00:14:49.475 "strip_size_kb": 64, 00:14:49.475 "state": "online", 00:14:49.475 "raid_level": "raid5f", 00:14:49.475 "superblock": true, 00:14:49.475 "num_base_bdevs": 4, 00:14:49.475 "num_base_bdevs_discovered": 4, 00:14:49.475 "num_base_bdevs_operational": 4, 00:14:49.475 "process": { 00:14:49.475 "type": "rebuild", 00:14:49.475 "target": "spare", 00:14:49.475 "progress": { 00:14:49.475 "blocks": 109440, 00:14:49.475 "percent": 57 00:14:49.475 } 00:14:49.475 }, 00:14:49.475 "base_bdevs_list": [ 00:14:49.475 { 00:14:49.475 "name": "spare", 00:14:49.475 "uuid": "bc267183-b338-502a-b407-f3c4e79e07d4", 00:14:49.475 "is_configured": true, 00:14:49.476 "data_offset": 2048, 00:14:49.476 "data_size": 63488 00:14:49.476 }, 00:14:49.476 { 00:14:49.476 "name": "BaseBdev2", 00:14:49.476 "uuid": "b74f4d5f-bb8e-5fed-9380-2160a1e607fd", 00:14:49.476 "is_configured": true, 00:14:49.476 "data_offset": 2048, 00:14:49.476 "data_size": 63488 00:14:49.476 }, 00:14:49.476 { 00:14:49.476 "name": "BaseBdev3", 00:14:49.476 "uuid": "d1d5fa7f-c375-5f0c-8f2c-bc91af13fcb9", 00:14:49.476 "is_configured": true, 00:14:49.476 "data_offset": 2048, 00:14:49.476 "data_size": 63488 00:14:49.476 }, 00:14:49.476 { 00:14:49.476 "name": "BaseBdev4", 00:14:49.476 "uuid": "28d210fc-6d96-52f9-aad9-d943379a0278", 00:14:49.476 "is_configured": true, 00:14:49.476 "data_offset": 2048, 00:14:49.476 "data_size": 63488 00:14:49.476 } 00:14:49.476 ] 00:14:49.476 }' 00:14:49.476 05:04:00 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:49.735 05:04:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:49.735 05:04:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:49.735 05:04:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:49.735 05:04:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:50.671 05:04:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:50.671 05:04:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:50.671 05:04:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:50.671 05:04:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:50.671 05:04:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:50.671 05:04:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:50.671 05:04:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:50.671 05:04:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.671 05:04:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:50.671 05:04:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:50.671 05:04:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.671 05:04:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:50.671 "name": "raid_bdev1", 00:14:50.671 "uuid": "1f71498e-7b1a-46f6-ad6a-b6eb557a2406", 00:14:50.671 
"strip_size_kb": 64, 00:14:50.671 "state": "online", 00:14:50.671 "raid_level": "raid5f", 00:14:50.671 "superblock": true, 00:14:50.671 "num_base_bdevs": 4, 00:14:50.671 "num_base_bdevs_discovered": 4, 00:14:50.671 "num_base_bdevs_operational": 4, 00:14:50.671 "process": { 00:14:50.671 "type": "rebuild", 00:14:50.671 "target": "spare", 00:14:50.671 "progress": { 00:14:50.671 "blocks": 130560, 00:14:50.671 "percent": 68 00:14:50.671 } 00:14:50.671 }, 00:14:50.671 "base_bdevs_list": [ 00:14:50.671 { 00:14:50.671 "name": "spare", 00:14:50.671 "uuid": "bc267183-b338-502a-b407-f3c4e79e07d4", 00:14:50.671 "is_configured": true, 00:14:50.671 "data_offset": 2048, 00:14:50.671 "data_size": 63488 00:14:50.671 }, 00:14:50.671 { 00:14:50.671 "name": "BaseBdev2", 00:14:50.671 "uuid": "b74f4d5f-bb8e-5fed-9380-2160a1e607fd", 00:14:50.671 "is_configured": true, 00:14:50.671 "data_offset": 2048, 00:14:50.671 "data_size": 63488 00:14:50.671 }, 00:14:50.671 { 00:14:50.671 "name": "BaseBdev3", 00:14:50.671 "uuid": "d1d5fa7f-c375-5f0c-8f2c-bc91af13fcb9", 00:14:50.671 "is_configured": true, 00:14:50.671 "data_offset": 2048, 00:14:50.671 "data_size": 63488 00:14:50.671 }, 00:14:50.671 { 00:14:50.671 "name": "BaseBdev4", 00:14:50.671 "uuid": "28d210fc-6d96-52f9-aad9-d943379a0278", 00:14:50.671 "is_configured": true, 00:14:50.671 "data_offset": 2048, 00:14:50.671 "data_size": 63488 00:14:50.671 } 00:14:50.671 ] 00:14:50.671 }' 00:14:50.671 05:04:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:50.671 05:04:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:50.671 05:04:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:50.929 05:04:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:50.929 05:04:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:51.867 
05:04:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:51.867 05:04:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:51.867 05:04:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:51.867 05:04:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:51.867 05:04:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:51.867 05:04:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:51.867 05:04:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:51.867 05:04:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.867 05:04:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:51.867 05:04:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.867 05:04:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.867 05:04:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:51.867 "name": "raid_bdev1", 00:14:51.867 "uuid": "1f71498e-7b1a-46f6-ad6a-b6eb557a2406", 00:14:51.867 "strip_size_kb": 64, 00:14:51.867 "state": "online", 00:14:51.867 "raid_level": "raid5f", 00:14:51.867 "superblock": true, 00:14:51.867 "num_base_bdevs": 4, 00:14:51.867 "num_base_bdevs_discovered": 4, 00:14:51.867 "num_base_bdevs_operational": 4, 00:14:51.867 "process": { 00:14:51.867 "type": "rebuild", 00:14:51.867 "target": "spare", 00:14:51.867 "progress": { 00:14:51.867 "blocks": 153600, 00:14:51.867 "percent": 80 00:14:51.867 } 00:14:51.867 }, 00:14:51.867 "base_bdevs_list": [ 00:14:51.867 { 00:14:51.867 "name": "spare", 00:14:51.867 "uuid": 
"bc267183-b338-502a-b407-f3c4e79e07d4", 00:14:51.867 "is_configured": true, 00:14:51.867 "data_offset": 2048, 00:14:51.867 "data_size": 63488 00:14:51.867 }, 00:14:51.867 { 00:14:51.867 "name": "BaseBdev2", 00:14:51.867 "uuid": "b74f4d5f-bb8e-5fed-9380-2160a1e607fd", 00:14:51.867 "is_configured": true, 00:14:51.867 "data_offset": 2048, 00:14:51.867 "data_size": 63488 00:14:51.867 }, 00:14:51.867 { 00:14:51.867 "name": "BaseBdev3", 00:14:51.867 "uuid": "d1d5fa7f-c375-5f0c-8f2c-bc91af13fcb9", 00:14:51.867 "is_configured": true, 00:14:51.867 "data_offset": 2048, 00:14:51.867 "data_size": 63488 00:14:51.867 }, 00:14:51.867 { 00:14:51.867 "name": "BaseBdev4", 00:14:51.867 "uuid": "28d210fc-6d96-52f9-aad9-d943379a0278", 00:14:51.867 "is_configured": true, 00:14:51.867 "data_offset": 2048, 00:14:51.867 "data_size": 63488 00:14:51.867 } 00:14:51.867 ] 00:14:51.867 }' 00:14:51.867 05:04:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:51.867 05:04:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:51.867 05:04:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:52.126 05:04:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:52.126 05:04:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:53.064 05:04:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:53.064 05:04:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:53.064 05:04:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:53.064 05:04:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:53.064 05:04:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local 
target=spare 00:14:53.064 05:04:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:53.064 05:04:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:53.064 05:04:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.064 05:04:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:53.064 05:04:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.064 05:04:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.064 05:04:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:53.064 "name": "raid_bdev1", 00:14:53.064 "uuid": "1f71498e-7b1a-46f6-ad6a-b6eb557a2406", 00:14:53.064 "strip_size_kb": 64, 00:14:53.064 "state": "online", 00:14:53.064 "raid_level": "raid5f", 00:14:53.064 "superblock": true, 00:14:53.064 "num_base_bdevs": 4, 00:14:53.064 "num_base_bdevs_discovered": 4, 00:14:53.064 "num_base_bdevs_operational": 4, 00:14:53.064 "process": { 00:14:53.064 "type": "rebuild", 00:14:53.064 "target": "spare", 00:14:53.064 "progress": { 00:14:53.064 "blocks": 174720, 00:14:53.064 "percent": 91 00:14:53.064 } 00:14:53.064 }, 00:14:53.064 "base_bdevs_list": [ 00:14:53.064 { 00:14:53.064 "name": "spare", 00:14:53.064 "uuid": "bc267183-b338-502a-b407-f3c4e79e07d4", 00:14:53.064 "is_configured": true, 00:14:53.064 "data_offset": 2048, 00:14:53.064 "data_size": 63488 00:14:53.064 }, 00:14:53.064 { 00:14:53.064 "name": "BaseBdev2", 00:14:53.064 "uuid": "b74f4d5f-bb8e-5fed-9380-2160a1e607fd", 00:14:53.064 "is_configured": true, 00:14:53.064 "data_offset": 2048, 00:14:53.064 "data_size": 63488 00:14:53.064 }, 00:14:53.064 { 00:14:53.064 "name": "BaseBdev3", 00:14:53.064 "uuid": "d1d5fa7f-c375-5f0c-8f2c-bc91af13fcb9", 00:14:53.064 "is_configured": true, 00:14:53.064 
"data_offset": 2048, 00:14:53.064 "data_size": 63488 00:14:53.064 }, 00:14:53.064 { 00:14:53.064 "name": "BaseBdev4", 00:14:53.064 "uuid": "28d210fc-6d96-52f9-aad9-d943379a0278", 00:14:53.064 "is_configured": true, 00:14:53.064 "data_offset": 2048, 00:14:53.064 "data_size": 63488 00:14:53.064 } 00:14:53.064 ] 00:14:53.064 }' 00:14:53.064 05:04:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:53.064 05:04:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:53.064 05:04:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:53.064 05:04:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:53.064 05:04:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:54.002 [2024-12-14 05:04:04.606760] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:54.002 [2024-12-14 05:04:04.606866] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:54.002 [2024-12-14 05:04:04.606988] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:54.261 05:04:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:54.261 05:04:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:54.261 05:04:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:54.261 05:04:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:54.261 05:04:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:54.261 05:04:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:54.261 05:04:04 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:54.262 05:04:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.262 05:04:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:54.262 05:04:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.262 05:04:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.262 05:04:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:54.262 "name": "raid_bdev1", 00:14:54.262 "uuid": "1f71498e-7b1a-46f6-ad6a-b6eb557a2406", 00:14:54.262 "strip_size_kb": 64, 00:14:54.262 "state": "online", 00:14:54.262 "raid_level": "raid5f", 00:14:54.262 "superblock": true, 00:14:54.262 "num_base_bdevs": 4, 00:14:54.262 "num_base_bdevs_discovered": 4, 00:14:54.262 "num_base_bdevs_operational": 4, 00:14:54.262 "base_bdevs_list": [ 00:14:54.262 { 00:14:54.262 "name": "spare", 00:14:54.262 "uuid": "bc267183-b338-502a-b407-f3c4e79e07d4", 00:14:54.262 "is_configured": true, 00:14:54.262 "data_offset": 2048, 00:14:54.262 "data_size": 63488 00:14:54.262 }, 00:14:54.262 { 00:14:54.262 "name": "BaseBdev2", 00:14:54.262 "uuid": "b74f4d5f-bb8e-5fed-9380-2160a1e607fd", 00:14:54.262 "is_configured": true, 00:14:54.262 "data_offset": 2048, 00:14:54.262 "data_size": 63488 00:14:54.262 }, 00:14:54.262 { 00:14:54.262 "name": "BaseBdev3", 00:14:54.262 "uuid": "d1d5fa7f-c375-5f0c-8f2c-bc91af13fcb9", 00:14:54.262 "is_configured": true, 00:14:54.262 "data_offset": 2048, 00:14:54.262 "data_size": 63488 00:14:54.262 }, 00:14:54.262 { 00:14:54.262 "name": "BaseBdev4", 00:14:54.262 "uuid": "28d210fc-6d96-52f9-aad9-d943379a0278", 00:14:54.262 "is_configured": true, 00:14:54.262 "data_offset": 2048, 00:14:54.262 "data_size": 63488 00:14:54.262 } 00:14:54.262 ] 00:14:54.262 }' 00:14:54.262 05:04:04 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:54.262 05:04:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:54.262 05:04:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:54.262 05:04:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:54.262 05:04:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:14:54.262 05:04:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:54.262 05:04:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:54.262 05:04:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:54.262 05:04:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:54.262 05:04:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:54.262 05:04:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:54.262 05:04:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.262 05:04:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.262 05:04:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:54.262 05:04:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.262 05:04:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:54.262 "name": "raid_bdev1", 00:14:54.262 "uuid": "1f71498e-7b1a-46f6-ad6a-b6eb557a2406", 00:14:54.262 "strip_size_kb": 64, 00:14:54.262 "state": "online", 00:14:54.262 "raid_level": "raid5f", 00:14:54.262 "superblock": true, 
00:14:54.262 "num_base_bdevs": 4, 00:14:54.262 "num_base_bdevs_discovered": 4, 00:14:54.262 "num_base_bdevs_operational": 4, 00:14:54.262 "base_bdevs_list": [ 00:14:54.262 { 00:14:54.262 "name": "spare", 00:14:54.262 "uuid": "bc267183-b338-502a-b407-f3c4e79e07d4", 00:14:54.262 "is_configured": true, 00:14:54.262 "data_offset": 2048, 00:14:54.262 "data_size": 63488 00:14:54.262 }, 00:14:54.262 { 00:14:54.262 "name": "BaseBdev2", 00:14:54.262 "uuid": "b74f4d5f-bb8e-5fed-9380-2160a1e607fd", 00:14:54.262 "is_configured": true, 00:14:54.262 "data_offset": 2048, 00:14:54.262 "data_size": 63488 00:14:54.262 }, 00:14:54.262 { 00:14:54.262 "name": "BaseBdev3", 00:14:54.262 "uuid": "d1d5fa7f-c375-5f0c-8f2c-bc91af13fcb9", 00:14:54.262 "is_configured": true, 00:14:54.262 "data_offset": 2048, 00:14:54.262 "data_size": 63488 00:14:54.262 }, 00:14:54.262 { 00:14:54.262 "name": "BaseBdev4", 00:14:54.262 "uuid": "28d210fc-6d96-52f9-aad9-d943379a0278", 00:14:54.262 "is_configured": true, 00:14:54.262 "data_offset": 2048, 00:14:54.262 "data_size": 63488 00:14:54.262 } 00:14:54.262 ] 00:14:54.262 }' 00:14:54.262 05:04:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:54.262 05:04:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:54.262 05:04:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:54.521 05:04:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:54.521 05:04:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:14:54.521 05:04:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:54.521 05:04:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:54.521 05:04:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- 
# local raid_level=raid5f 00:14:54.521 05:04:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:54.521 05:04:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:54.521 05:04:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:54.521 05:04:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:54.521 05:04:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:54.521 05:04:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:54.521 05:04:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:54.521 05:04:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:54.521 05:04:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.521 05:04:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.521 05:04:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.521 05:04:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:54.521 "name": "raid_bdev1", 00:14:54.521 "uuid": "1f71498e-7b1a-46f6-ad6a-b6eb557a2406", 00:14:54.521 "strip_size_kb": 64, 00:14:54.521 "state": "online", 00:14:54.521 "raid_level": "raid5f", 00:14:54.521 "superblock": true, 00:14:54.521 "num_base_bdevs": 4, 00:14:54.521 "num_base_bdevs_discovered": 4, 00:14:54.521 "num_base_bdevs_operational": 4, 00:14:54.521 "base_bdevs_list": [ 00:14:54.521 { 00:14:54.521 "name": "spare", 00:14:54.521 "uuid": "bc267183-b338-502a-b407-f3c4e79e07d4", 00:14:54.521 "is_configured": true, 00:14:54.521 "data_offset": 2048, 00:14:54.521 "data_size": 63488 00:14:54.521 }, 00:14:54.521 { 00:14:54.521 "name": 
"BaseBdev2", 00:14:54.521 "uuid": "b74f4d5f-bb8e-5fed-9380-2160a1e607fd", 00:14:54.521 "is_configured": true, 00:14:54.521 "data_offset": 2048, 00:14:54.521 "data_size": 63488 00:14:54.521 }, 00:14:54.521 { 00:14:54.521 "name": "BaseBdev3", 00:14:54.521 "uuid": "d1d5fa7f-c375-5f0c-8f2c-bc91af13fcb9", 00:14:54.521 "is_configured": true, 00:14:54.521 "data_offset": 2048, 00:14:54.521 "data_size": 63488 00:14:54.521 }, 00:14:54.521 { 00:14:54.521 "name": "BaseBdev4", 00:14:54.521 "uuid": "28d210fc-6d96-52f9-aad9-d943379a0278", 00:14:54.521 "is_configured": true, 00:14:54.521 "data_offset": 2048, 00:14:54.521 "data_size": 63488 00:14:54.521 } 00:14:54.521 ] 00:14:54.521 }' 00:14:54.522 05:04:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:54.522 05:04:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.781 05:04:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:54.781 05:04:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.781 05:04:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.781 [2024-12-14 05:04:05.634268] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:54.781 [2024-12-14 05:04:05.634334] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:54.781 [2024-12-14 05:04:05.634430] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:54.781 [2024-12-14 05:04:05.634534] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:54.781 [2024-12-14 05:04:05.634604] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:14:54.781 05:04:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.781 
05:04:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:14:54.781 05:04:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:54.781 05:04:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.781 05:04:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.781 05:04:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.040 05:04:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:55.040 05:04:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:55.040 05:04:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:14:55.040 05:04:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:14:55.040 05:04:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:55.040 05:04:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:14:55.040 05:04:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:55.040 05:04:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:55.040 05:04:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:55.040 05:04:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:14:55.040 05:04:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:55.040 05:04:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:55.040 05:04:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk 
BaseBdev1 /dev/nbd0 00:14:55.040 /dev/nbd0 00:14:55.300 05:04:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:55.300 05:04:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:55.300 05:04:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:14:55.300 05:04:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:14:55.300 05:04:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:55.300 05:04:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:55.300 05:04:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:14:55.300 05:04:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:14:55.300 05:04:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:55.300 05:04:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:55.300 05:04:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:55.300 1+0 records in 00:14:55.300 1+0 records out 00:14:55.300 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000419535 s, 9.8 MB/s 00:14:55.300 05:04:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:55.300 05:04:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:14:55.300 05:04:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:55.300 05:04:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:55.300 05:04:05 bdev_raid.raid5f_rebuild_test_sb 
-- common/autotest_common.sh@889 -- # return 0 00:14:55.300 05:04:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:55.300 05:04:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:55.300 05:04:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:14:55.300 /dev/nbd1 00:14:55.559 05:04:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:55.559 05:04:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:55.559 05:04:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:14:55.559 05:04:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:14:55.559 05:04:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:55.559 05:04:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:55.559 05:04:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:14:55.559 05:04:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:14:55.559 05:04:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:55.559 05:04:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:55.559 05:04:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:55.559 1+0 records in 00:14:55.559 1+0 records out 00:14:55.559 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000415367 s, 9.9 MB/s 00:14:55.559 05:04:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:55.559 
05:04:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:14:55.560 05:04:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:55.560 05:04:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:55.560 05:04:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:14:55.560 05:04:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:55.560 05:04:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:55.560 05:04:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:14:55.560 05:04:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:14:55.560 05:04:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:55.560 05:04:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:55.560 05:04:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:55.560 05:04:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:14:55.560 05:04:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:55.560 05:04:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:55.819 05:04:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:55.819 05:04:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:55.819 05:04:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:55.819 05:04:06 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:55.819 05:04:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:55.819 05:04:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:55.819 05:04:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:55.819 05:04:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:55.819 05:04:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:55.819 05:04:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:55.819 05:04:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:55.819 05:04:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:55.819 05:04:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:55.819 05:04:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:55.819 05:04:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:55.819 05:04:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:55.819 05:04:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:55.819 05:04:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:55.819 05:04:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:14:55.819 05:04:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:14:55.819 05:04:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.819 05:04:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:55.819 05:04:06 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.819 05:04:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:55.819 05:04:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.819 05:04:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.079 [2024-12-14 05:04:06.699782] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:56.079 [2024-12-14 05:04:06.699835] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:56.079 [2024-12-14 05:04:06.699856] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:14:56.079 [2024-12-14 05:04:06.699866] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:56.079 [2024-12-14 05:04:06.701954] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:56.079 [2024-12-14 05:04:06.702002] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:56.079 [2024-12-14 05:04:06.702073] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:56.079 [2024-12-14 05:04:06.702111] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:56.079 [2024-12-14 05:04:06.702231] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:56.079 [2024-12-14 05:04:06.702342] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:56.079 [2024-12-14 05:04:06.702405] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:56.079 spare 00:14:56.079 05:04:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.079 05:04:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # 
rpc_cmd bdev_wait_for_examine 00:14:56.079 05:04:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.079 05:04:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.079 [2024-12-14 05:04:06.802301] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006600 00:14:56.079 [2024-12-14 05:04:06.802324] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:14:56.079 [2024-12-14 05:04:06.802561] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000049030 00:14:56.079 [2024-12-14 05:04:06.802962] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006600 00:14:56.079 [2024-12-14 05:04:06.802979] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006600 00:14:56.079 [2024-12-14 05:04:06.803088] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:56.079 05:04:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.079 05:04:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:14:56.079 05:04:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:56.079 05:04:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:56.079 05:04:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:56.079 05:04:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:56.079 05:04:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:56.079 05:04:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:56.079 05:04:06 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:56.079 05:04:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:56.079 05:04:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:56.079 05:04:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:56.079 05:04:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:56.079 05:04:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.079 05:04:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.079 05:04:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.079 05:04:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:56.079 "name": "raid_bdev1", 00:14:56.079 "uuid": "1f71498e-7b1a-46f6-ad6a-b6eb557a2406", 00:14:56.079 "strip_size_kb": 64, 00:14:56.079 "state": "online", 00:14:56.079 "raid_level": "raid5f", 00:14:56.079 "superblock": true, 00:14:56.079 "num_base_bdevs": 4, 00:14:56.079 "num_base_bdevs_discovered": 4, 00:14:56.079 "num_base_bdevs_operational": 4, 00:14:56.079 "base_bdevs_list": [ 00:14:56.079 { 00:14:56.079 "name": "spare", 00:14:56.079 "uuid": "bc267183-b338-502a-b407-f3c4e79e07d4", 00:14:56.079 "is_configured": true, 00:14:56.079 "data_offset": 2048, 00:14:56.079 "data_size": 63488 00:14:56.079 }, 00:14:56.079 { 00:14:56.079 "name": "BaseBdev2", 00:14:56.079 "uuid": "b74f4d5f-bb8e-5fed-9380-2160a1e607fd", 00:14:56.079 "is_configured": true, 00:14:56.079 "data_offset": 2048, 00:14:56.079 "data_size": 63488 00:14:56.079 }, 00:14:56.079 { 00:14:56.079 "name": "BaseBdev3", 00:14:56.079 "uuid": "d1d5fa7f-c375-5f0c-8f2c-bc91af13fcb9", 00:14:56.079 "is_configured": true, 00:14:56.079 "data_offset": 2048, 00:14:56.079 "data_size": 63488 00:14:56.079 }, 
00:14:56.079 { 00:14:56.079 "name": "BaseBdev4", 00:14:56.079 "uuid": "28d210fc-6d96-52f9-aad9-d943379a0278", 00:14:56.080 "is_configured": true, 00:14:56.080 "data_offset": 2048, 00:14:56.080 "data_size": 63488 00:14:56.080 } 00:14:56.080 ] 00:14:56.080 }' 00:14:56.080 05:04:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:56.080 05:04:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.649 05:04:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:56.649 05:04:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:56.649 05:04:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:56.649 05:04:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:56.649 05:04:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:56.649 05:04:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:56.649 05:04:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:56.649 05:04:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.649 05:04:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.649 05:04:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.649 05:04:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:56.649 "name": "raid_bdev1", 00:14:56.649 "uuid": "1f71498e-7b1a-46f6-ad6a-b6eb557a2406", 00:14:56.649 "strip_size_kb": 64, 00:14:56.649 "state": "online", 00:14:56.649 "raid_level": "raid5f", 00:14:56.649 "superblock": true, 00:14:56.649 "num_base_bdevs": 4, 00:14:56.649 "num_base_bdevs_discovered": 4, 
00:14:56.649 "num_base_bdevs_operational": 4, 00:14:56.649 "base_bdevs_list": [ 00:14:56.649 { 00:14:56.649 "name": "spare", 00:14:56.649 "uuid": "bc267183-b338-502a-b407-f3c4e79e07d4", 00:14:56.649 "is_configured": true, 00:14:56.649 "data_offset": 2048, 00:14:56.649 "data_size": 63488 00:14:56.649 }, 00:14:56.649 { 00:14:56.649 "name": "BaseBdev2", 00:14:56.649 "uuid": "b74f4d5f-bb8e-5fed-9380-2160a1e607fd", 00:14:56.649 "is_configured": true, 00:14:56.649 "data_offset": 2048, 00:14:56.649 "data_size": 63488 00:14:56.649 }, 00:14:56.649 { 00:14:56.649 "name": "BaseBdev3", 00:14:56.649 "uuid": "d1d5fa7f-c375-5f0c-8f2c-bc91af13fcb9", 00:14:56.649 "is_configured": true, 00:14:56.649 "data_offset": 2048, 00:14:56.649 "data_size": 63488 00:14:56.649 }, 00:14:56.649 { 00:14:56.649 "name": "BaseBdev4", 00:14:56.649 "uuid": "28d210fc-6d96-52f9-aad9-d943379a0278", 00:14:56.649 "is_configured": true, 00:14:56.649 "data_offset": 2048, 00:14:56.649 "data_size": 63488 00:14:56.649 } 00:14:56.649 ] 00:14:56.649 }' 00:14:56.649 05:04:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:56.649 05:04:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:56.649 05:04:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:56.649 05:04:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:56.649 05:04:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:14:56.649 05:04:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:56.649 05:04:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.649 05:04:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.649 05:04:07 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.649 05:04:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:14:56.649 05:04:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:56.649 05:04:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.649 05:04:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.649 [2024-12-14 05:04:07.395638] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:56.649 05:04:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.649 05:04:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:56.649 05:04:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:56.649 05:04:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:56.649 05:04:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:56.649 05:04:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:56.649 05:04:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:56.649 05:04:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:56.649 05:04:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:56.649 05:04:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:56.649 05:04:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:56.649 05:04:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:56.649 05:04:07 bdev_raid.raid5f_rebuild_test_sb 
-- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:56.649 05:04:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.649 05:04:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.649 05:04:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.649 05:04:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:56.649 "name": "raid_bdev1", 00:14:56.649 "uuid": "1f71498e-7b1a-46f6-ad6a-b6eb557a2406", 00:14:56.649 "strip_size_kb": 64, 00:14:56.649 "state": "online", 00:14:56.649 "raid_level": "raid5f", 00:14:56.649 "superblock": true, 00:14:56.649 "num_base_bdevs": 4, 00:14:56.649 "num_base_bdevs_discovered": 3, 00:14:56.649 "num_base_bdevs_operational": 3, 00:14:56.649 "base_bdevs_list": [ 00:14:56.649 { 00:14:56.649 "name": null, 00:14:56.649 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:56.649 "is_configured": false, 00:14:56.649 "data_offset": 0, 00:14:56.649 "data_size": 63488 00:14:56.649 }, 00:14:56.649 { 00:14:56.649 "name": "BaseBdev2", 00:14:56.649 "uuid": "b74f4d5f-bb8e-5fed-9380-2160a1e607fd", 00:14:56.649 "is_configured": true, 00:14:56.649 "data_offset": 2048, 00:14:56.649 "data_size": 63488 00:14:56.649 }, 00:14:56.649 { 00:14:56.649 "name": "BaseBdev3", 00:14:56.649 "uuid": "d1d5fa7f-c375-5f0c-8f2c-bc91af13fcb9", 00:14:56.649 "is_configured": true, 00:14:56.649 "data_offset": 2048, 00:14:56.649 "data_size": 63488 00:14:56.649 }, 00:14:56.649 { 00:14:56.649 "name": "BaseBdev4", 00:14:56.649 "uuid": "28d210fc-6d96-52f9-aad9-d943379a0278", 00:14:56.649 "is_configured": true, 00:14:56.649 "data_offset": 2048, 00:14:56.649 "data_size": 63488 00:14:56.649 } 00:14:56.649 ] 00:14:56.649 }' 00:14:56.649 05:04:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:56.649 05:04:07 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:14:57.218 05:04:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:57.218 05:04:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.218 05:04:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.218 [2024-12-14 05:04:07.819059] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:57.218 [2024-12-14 05:04:07.819239] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:14:57.218 [2024-12-14 05:04:07.819255] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:14:57.218 [2024-12-14 05:04:07.819293] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:57.218 [2024-12-14 05:04:07.822504] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000049100 00:14:57.218 [2024-12-14 05:04:07.824664] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:57.218 05:04:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.218 05:04:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:14:58.156 05:04:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:58.156 05:04:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:58.156 05:04:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:58.156 05:04:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:58.156 05:04:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:58.156 
05:04:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:58.156 05:04:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.156 05:04:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:58.156 05:04:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.156 05:04:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.156 05:04:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:58.156 "name": "raid_bdev1", 00:14:58.156 "uuid": "1f71498e-7b1a-46f6-ad6a-b6eb557a2406", 00:14:58.156 "strip_size_kb": 64, 00:14:58.156 "state": "online", 00:14:58.156 "raid_level": "raid5f", 00:14:58.156 "superblock": true, 00:14:58.156 "num_base_bdevs": 4, 00:14:58.156 "num_base_bdevs_discovered": 4, 00:14:58.156 "num_base_bdevs_operational": 4, 00:14:58.156 "process": { 00:14:58.156 "type": "rebuild", 00:14:58.156 "target": "spare", 00:14:58.156 "progress": { 00:14:58.156 "blocks": 19200, 00:14:58.156 "percent": 10 00:14:58.156 } 00:14:58.156 }, 00:14:58.156 "base_bdevs_list": [ 00:14:58.156 { 00:14:58.156 "name": "spare", 00:14:58.156 "uuid": "bc267183-b338-502a-b407-f3c4e79e07d4", 00:14:58.156 "is_configured": true, 00:14:58.156 "data_offset": 2048, 00:14:58.156 "data_size": 63488 00:14:58.156 }, 00:14:58.156 { 00:14:58.156 "name": "BaseBdev2", 00:14:58.156 "uuid": "b74f4d5f-bb8e-5fed-9380-2160a1e607fd", 00:14:58.156 "is_configured": true, 00:14:58.156 "data_offset": 2048, 00:14:58.156 "data_size": 63488 00:14:58.156 }, 00:14:58.156 { 00:14:58.156 "name": "BaseBdev3", 00:14:58.156 "uuid": "d1d5fa7f-c375-5f0c-8f2c-bc91af13fcb9", 00:14:58.156 "is_configured": true, 00:14:58.156 "data_offset": 2048, 00:14:58.156 "data_size": 63488 00:14:58.156 }, 00:14:58.156 { 00:14:58.156 "name": "BaseBdev4", 00:14:58.156 "uuid": 
"28d210fc-6d96-52f9-aad9-d943379a0278", 00:14:58.156 "is_configured": true, 00:14:58.156 "data_offset": 2048, 00:14:58.156 "data_size": 63488 00:14:58.156 } 00:14:58.156 ] 00:14:58.156 }' 00:14:58.157 05:04:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:58.157 05:04:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:58.157 05:04:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:58.157 05:04:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:58.157 05:04:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:14:58.157 05:04:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.157 05:04:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.157 [2024-12-14 05:04:08.995288] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:58.157 [2024-12-14 05:04:09.029690] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:58.157 [2024-12-14 05:04:09.029739] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:58.157 [2024-12-14 05:04:09.029756] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:58.157 [2024-12-14 05:04:09.029763] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:58.416 05:04:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.416 05:04:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:58.416 05:04:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:58.416 05:04:09 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:58.416 05:04:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:58.416 05:04:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:58.416 05:04:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:58.416 05:04:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:58.416 05:04:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:58.416 05:04:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:58.416 05:04:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:58.416 05:04:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:58.416 05:04:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.416 05:04:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:58.416 05:04:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.416 05:04:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.416 05:04:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:58.416 "name": "raid_bdev1", 00:14:58.416 "uuid": "1f71498e-7b1a-46f6-ad6a-b6eb557a2406", 00:14:58.416 "strip_size_kb": 64, 00:14:58.416 "state": "online", 00:14:58.416 "raid_level": "raid5f", 00:14:58.416 "superblock": true, 00:14:58.416 "num_base_bdevs": 4, 00:14:58.416 "num_base_bdevs_discovered": 3, 00:14:58.416 "num_base_bdevs_operational": 3, 00:14:58.416 "base_bdevs_list": [ 00:14:58.416 { 00:14:58.416 "name": null, 00:14:58.416 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:14:58.416 "is_configured": false, 00:14:58.416 "data_offset": 0, 00:14:58.416 "data_size": 63488 00:14:58.416 }, 00:14:58.416 { 00:14:58.416 "name": "BaseBdev2", 00:14:58.416 "uuid": "b74f4d5f-bb8e-5fed-9380-2160a1e607fd", 00:14:58.416 "is_configured": true, 00:14:58.416 "data_offset": 2048, 00:14:58.416 "data_size": 63488 00:14:58.416 }, 00:14:58.416 { 00:14:58.416 "name": "BaseBdev3", 00:14:58.416 "uuid": "d1d5fa7f-c375-5f0c-8f2c-bc91af13fcb9", 00:14:58.416 "is_configured": true, 00:14:58.416 "data_offset": 2048, 00:14:58.416 "data_size": 63488 00:14:58.416 }, 00:14:58.416 { 00:14:58.416 "name": "BaseBdev4", 00:14:58.416 "uuid": "28d210fc-6d96-52f9-aad9-d943379a0278", 00:14:58.416 "is_configured": true, 00:14:58.416 "data_offset": 2048, 00:14:58.416 "data_size": 63488 00:14:58.416 } 00:14:58.416 ] 00:14:58.416 }' 00:14:58.416 05:04:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:58.416 05:04:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.676 05:04:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:58.676 05:04:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.676 05:04:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.676 [2024-12-14 05:04:09.469738] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:58.676 [2024-12-14 05:04:09.469825] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:58.676 [2024-12-14 05:04:09.469867] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:14:58.676 [2024-12-14 05:04:09.469895] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:58.676 [2024-12-14 05:04:09.470334] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: 
pt_bdev registered 00:14:58.676 [2024-12-14 05:04:09.470389] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:58.676 [2024-12-14 05:04:09.470489] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:58.676 [2024-12-14 05:04:09.470525] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:14:58.676 [2024-12-14 05:04:09.470567] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:14:58.676 [2024-12-14 05:04:09.470643] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:58.676 [2024-12-14 05:04:09.473334] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000491d0 00:14:58.676 [2024-12-14 05:04:09.475419] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:58.676 spare 00:14:58.676 05:04:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.676 05:04:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:14:59.613 05:04:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:59.613 05:04:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:59.613 05:04:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:59.613 05:04:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:59.613 05:04:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:59.613 05:04:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:59.613 05:04:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name 
== "raid_bdev1")' 00:14:59.613 05:04:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.613 05:04:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.873 05:04:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.873 05:04:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:59.873 "name": "raid_bdev1", 00:14:59.873 "uuid": "1f71498e-7b1a-46f6-ad6a-b6eb557a2406", 00:14:59.873 "strip_size_kb": 64, 00:14:59.873 "state": "online", 00:14:59.873 "raid_level": "raid5f", 00:14:59.873 "superblock": true, 00:14:59.873 "num_base_bdevs": 4, 00:14:59.873 "num_base_bdevs_discovered": 4, 00:14:59.873 "num_base_bdevs_operational": 4, 00:14:59.873 "process": { 00:14:59.873 "type": "rebuild", 00:14:59.873 "target": "spare", 00:14:59.873 "progress": { 00:14:59.873 "blocks": 19200, 00:14:59.873 "percent": 10 00:14:59.873 } 00:14:59.873 }, 00:14:59.873 "base_bdevs_list": [ 00:14:59.873 { 00:14:59.873 "name": "spare", 00:14:59.873 "uuid": "bc267183-b338-502a-b407-f3c4e79e07d4", 00:14:59.873 "is_configured": true, 00:14:59.873 "data_offset": 2048, 00:14:59.873 "data_size": 63488 00:14:59.873 }, 00:14:59.873 { 00:14:59.873 "name": "BaseBdev2", 00:14:59.873 "uuid": "b74f4d5f-bb8e-5fed-9380-2160a1e607fd", 00:14:59.873 "is_configured": true, 00:14:59.873 "data_offset": 2048, 00:14:59.873 "data_size": 63488 00:14:59.873 }, 00:14:59.873 { 00:14:59.873 "name": "BaseBdev3", 00:14:59.873 "uuid": "d1d5fa7f-c375-5f0c-8f2c-bc91af13fcb9", 00:14:59.873 "is_configured": true, 00:14:59.873 "data_offset": 2048, 00:14:59.873 "data_size": 63488 00:14:59.873 }, 00:14:59.873 { 00:14:59.873 "name": "BaseBdev4", 00:14:59.873 "uuid": "28d210fc-6d96-52f9-aad9-d943379a0278", 00:14:59.873 "is_configured": true, 00:14:59.873 "data_offset": 2048, 00:14:59.873 "data_size": 63488 00:14:59.873 } 00:14:59.873 ] 00:14:59.873 }' 00:14:59.873 05:04:10 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:59.873 05:04:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:59.873 05:04:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:59.873 05:04:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:59.873 05:04:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:14:59.873 05:04:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.873 05:04:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.873 [2024-12-14 05:04:10.641888] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:59.873 [2024-12-14 05:04:10.680571] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:59.873 [2024-12-14 05:04:10.680622] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:59.873 [2024-12-14 05:04:10.680637] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:59.873 [2024-12-14 05:04:10.680645] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:59.873 05:04:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.873 05:04:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:59.873 05:04:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:59.873 05:04:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:59.873 05:04:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:59.873 
05:04:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:59.873 05:04:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:59.873 05:04:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:59.873 05:04:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:59.873 05:04:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:59.873 05:04:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:59.873 05:04:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:59.873 05:04:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:59.873 05:04:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.873 05:04:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.873 05:04:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.873 05:04:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:59.873 "name": "raid_bdev1", 00:14:59.873 "uuid": "1f71498e-7b1a-46f6-ad6a-b6eb557a2406", 00:14:59.873 "strip_size_kb": 64, 00:14:59.873 "state": "online", 00:14:59.873 "raid_level": "raid5f", 00:14:59.873 "superblock": true, 00:14:59.873 "num_base_bdevs": 4, 00:14:59.873 "num_base_bdevs_discovered": 3, 00:14:59.873 "num_base_bdevs_operational": 3, 00:14:59.873 "base_bdevs_list": [ 00:14:59.873 { 00:14:59.873 "name": null, 00:14:59.873 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:59.873 "is_configured": false, 00:14:59.873 "data_offset": 0, 00:14:59.873 "data_size": 63488 00:14:59.873 }, 00:14:59.873 { 00:14:59.874 "name": "BaseBdev2", 00:14:59.874 "uuid": 
"b74f4d5f-bb8e-5fed-9380-2160a1e607fd", 00:14:59.874 "is_configured": true, 00:14:59.874 "data_offset": 2048, 00:14:59.874 "data_size": 63488 00:14:59.874 }, 00:14:59.874 { 00:14:59.874 "name": "BaseBdev3", 00:14:59.874 "uuid": "d1d5fa7f-c375-5f0c-8f2c-bc91af13fcb9", 00:14:59.874 "is_configured": true, 00:14:59.874 "data_offset": 2048, 00:14:59.874 "data_size": 63488 00:14:59.874 }, 00:14:59.874 { 00:14:59.874 "name": "BaseBdev4", 00:14:59.874 "uuid": "28d210fc-6d96-52f9-aad9-d943379a0278", 00:14:59.874 "is_configured": true, 00:14:59.874 "data_offset": 2048, 00:14:59.874 "data_size": 63488 00:14:59.874 } 00:14:59.874 ] 00:14:59.874 }' 00:14:59.874 05:04:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:59.874 05:04:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:00.442 05:04:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:00.442 05:04:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:00.442 05:04:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:00.442 05:04:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:00.442 05:04:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:00.442 05:04:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:00.442 05:04:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:00.442 05:04:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.442 05:04:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:00.442 05:04:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.442 05:04:11 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:00.442 "name": "raid_bdev1", 00:15:00.442 "uuid": "1f71498e-7b1a-46f6-ad6a-b6eb557a2406", 00:15:00.442 "strip_size_kb": 64, 00:15:00.442 "state": "online", 00:15:00.442 "raid_level": "raid5f", 00:15:00.442 "superblock": true, 00:15:00.442 "num_base_bdevs": 4, 00:15:00.442 "num_base_bdevs_discovered": 3, 00:15:00.442 "num_base_bdevs_operational": 3, 00:15:00.442 "base_bdevs_list": [ 00:15:00.442 { 00:15:00.442 "name": null, 00:15:00.442 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:00.442 "is_configured": false, 00:15:00.442 "data_offset": 0, 00:15:00.442 "data_size": 63488 00:15:00.442 }, 00:15:00.442 { 00:15:00.442 "name": "BaseBdev2", 00:15:00.442 "uuid": "b74f4d5f-bb8e-5fed-9380-2160a1e607fd", 00:15:00.442 "is_configured": true, 00:15:00.442 "data_offset": 2048, 00:15:00.442 "data_size": 63488 00:15:00.442 }, 00:15:00.442 { 00:15:00.442 "name": "BaseBdev3", 00:15:00.442 "uuid": "d1d5fa7f-c375-5f0c-8f2c-bc91af13fcb9", 00:15:00.442 "is_configured": true, 00:15:00.442 "data_offset": 2048, 00:15:00.442 "data_size": 63488 00:15:00.442 }, 00:15:00.442 { 00:15:00.442 "name": "BaseBdev4", 00:15:00.442 "uuid": "28d210fc-6d96-52f9-aad9-d943379a0278", 00:15:00.442 "is_configured": true, 00:15:00.442 "data_offset": 2048, 00:15:00.442 "data_size": 63488 00:15:00.442 } 00:15:00.442 ] 00:15:00.442 }' 00:15:00.442 05:04:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:00.442 05:04:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:00.442 05:04:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:00.443 05:04:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:00.443 05:04:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:15:00.443 
05:04:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.443 05:04:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:00.702 05:04:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.702 05:04:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:00.702 05:04:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.702 05:04:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:00.702 [2024-12-14 05:04:11.340105] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:00.702 [2024-12-14 05:04:11.340173] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:00.702 [2024-12-14 05:04:11.340193] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:15:00.702 [2024-12-14 05:04:11.340206] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:00.702 [2024-12-14 05:04:11.340619] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:00.702 [2024-12-14 05:04:11.340707] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:00.702 [2024-12-14 05:04:11.340779] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:15:00.702 [2024-12-14 05:04:11.340796] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:15:00.702 [2024-12-14 05:04:11.340813] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:00.702 [2024-12-14 05:04:11.340825] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 
00:15:00.702 BaseBdev1 00:15:00.702 05:04:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.702 05:04:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:15:01.640 05:04:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:01.640 05:04:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:01.640 05:04:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:01.640 05:04:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:01.640 05:04:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:01.640 05:04:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:01.640 05:04:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:01.640 05:04:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:01.640 05:04:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:01.640 05:04:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:01.640 05:04:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:01.640 05:04:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:01.640 05:04:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.640 05:04:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:01.640 05:04:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.640 05:04:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:15:01.640 "name": "raid_bdev1", 00:15:01.640 "uuid": "1f71498e-7b1a-46f6-ad6a-b6eb557a2406", 00:15:01.640 "strip_size_kb": 64, 00:15:01.640 "state": "online", 00:15:01.640 "raid_level": "raid5f", 00:15:01.640 "superblock": true, 00:15:01.640 "num_base_bdevs": 4, 00:15:01.640 "num_base_bdevs_discovered": 3, 00:15:01.640 "num_base_bdevs_operational": 3, 00:15:01.640 "base_bdevs_list": [ 00:15:01.640 { 00:15:01.640 "name": null, 00:15:01.640 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:01.640 "is_configured": false, 00:15:01.640 "data_offset": 0, 00:15:01.640 "data_size": 63488 00:15:01.640 }, 00:15:01.640 { 00:15:01.640 "name": "BaseBdev2", 00:15:01.640 "uuid": "b74f4d5f-bb8e-5fed-9380-2160a1e607fd", 00:15:01.640 "is_configured": true, 00:15:01.640 "data_offset": 2048, 00:15:01.640 "data_size": 63488 00:15:01.640 }, 00:15:01.640 { 00:15:01.640 "name": "BaseBdev3", 00:15:01.640 "uuid": "d1d5fa7f-c375-5f0c-8f2c-bc91af13fcb9", 00:15:01.640 "is_configured": true, 00:15:01.640 "data_offset": 2048, 00:15:01.640 "data_size": 63488 00:15:01.640 }, 00:15:01.640 { 00:15:01.640 "name": "BaseBdev4", 00:15:01.640 "uuid": "28d210fc-6d96-52f9-aad9-d943379a0278", 00:15:01.640 "is_configured": true, 00:15:01.640 "data_offset": 2048, 00:15:01.640 "data_size": 63488 00:15:01.640 } 00:15:01.640 ] 00:15:01.640 }' 00:15:01.640 05:04:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:01.640 05:04:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:01.899 05:04:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:01.899 05:04:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:01.899 05:04:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:01.899 05:04:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local 
target=none 00:15:01.899 05:04:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:01.899 05:04:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:01.899 05:04:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:01.899 05:04:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.899 05:04:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.159 05:04:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.159 05:04:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:02.159 "name": "raid_bdev1", 00:15:02.159 "uuid": "1f71498e-7b1a-46f6-ad6a-b6eb557a2406", 00:15:02.159 "strip_size_kb": 64, 00:15:02.159 "state": "online", 00:15:02.159 "raid_level": "raid5f", 00:15:02.159 "superblock": true, 00:15:02.159 "num_base_bdevs": 4, 00:15:02.159 "num_base_bdevs_discovered": 3, 00:15:02.159 "num_base_bdevs_operational": 3, 00:15:02.159 "base_bdevs_list": [ 00:15:02.159 { 00:15:02.159 "name": null, 00:15:02.159 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:02.159 "is_configured": false, 00:15:02.159 "data_offset": 0, 00:15:02.159 "data_size": 63488 00:15:02.159 }, 00:15:02.159 { 00:15:02.159 "name": "BaseBdev2", 00:15:02.159 "uuid": "b74f4d5f-bb8e-5fed-9380-2160a1e607fd", 00:15:02.159 "is_configured": true, 00:15:02.159 "data_offset": 2048, 00:15:02.159 "data_size": 63488 00:15:02.159 }, 00:15:02.159 { 00:15:02.159 "name": "BaseBdev3", 00:15:02.159 "uuid": "d1d5fa7f-c375-5f0c-8f2c-bc91af13fcb9", 00:15:02.159 "is_configured": true, 00:15:02.159 "data_offset": 2048, 00:15:02.159 "data_size": 63488 00:15:02.159 }, 00:15:02.159 { 00:15:02.159 "name": "BaseBdev4", 00:15:02.159 "uuid": "28d210fc-6d96-52f9-aad9-d943379a0278", 00:15:02.159 "is_configured": true, 
00:15:02.159 "data_offset": 2048, 00:15:02.159 "data_size": 63488 00:15:02.159 } 00:15:02.159 ] 00:15:02.159 }' 00:15:02.159 05:04:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:02.159 05:04:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:02.159 05:04:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:02.159 05:04:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:02.159 05:04:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:02.159 05:04:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@650 -- # local es=0 00:15:02.159 05:04:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:02.159 05:04:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:15:02.159 05:04:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:02.159 05:04:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:15:02.159 05:04:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:02.159 05:04:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:02.159 05:04:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.159 05:04:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.159 [2024-12-14 05:04:12.929467] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:02.159 [2024-12-14 05:04:12.929584] bdev_raid.c:3690:raid_bdev_examine_sb: 
*DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:15:02.159 [2024-12-14 05:04:12.929597] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:02.159 request: 00:15:02.159 { 00:15:02.159 "base_bdev": "BaseBdev1", 00:15:02.159 "raid_bdev": "raid_bdev1", 00:15:02.159 "method": "bdev_raid_add_base_bdev", 00:15:02.159 "req_id": 1 00:15:02.159 } 00:15:02.159 Got JSON-RPC error response 00:15:02.159 response: 00:15:02.159 { 00:15:02.159 "code": -22, 00:15:02.159 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:15:02.159 } 00:15:02.159 05:04:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:15:02.159 05:04:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # es=1 00:15:02.159 05:04:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:02.159 05:04:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:02.159 05:04:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:02.159 05:04:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:15:03.097 05:04:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:03.097 05:04:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:03.097 05:04:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:03.097 05:04:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:03.097 05:04:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:03.097 05:04:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:03.097 05:04:13 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:03.097 05:04:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:03.097 05:04:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:03.097 05:04:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:03.097 05:04:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:03.097 05:04:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:03.097 05:04:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.097 05:04:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.097 05:04:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.358 05:04:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:03.358 "name": "raid_bdev1", 00:15:03.358 "uuid": "1f71498e-7b1a-46f6-ad6a-b6eb557a2406", 00:15:03.358 "strip_size_kb": 64, 00:15:03.358 "state": "online", 00:15:03.358 "raid_level": "raid5f", 00:15:03.358 "superblock": true, 00:15:03.358 "num_base_bdevs": 4, 00:15:03.358 "num_base_bdevs_discovered": 3, 00:15:03.358 "num_base_bdevs_operational": 3, 00:15:03.358 "base_bdevs_list": [ 00:15:03.358 { 00:15:03.358 "name": null, 00:15:03.358 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:03.358 "is_configured": false, 00:15:03.358 "data_offset": 0, 00:15:03.359 "data_size": 63488 00:15:03.359 }, 00:15:03.359 { 00:15:03.359 "name": "BaseBdev2", 00:15:03.359 "uuid": "b74f4d5f-bb8e-5fed-9380-2160a1e607fd", 00:15:03.359 "is_configured": true, 00:15:03.359 "data_offset": 2048, 00:15:03.359 "data_size": 63488 00:15:03.359 }, 00:15:03.359 { 00:15:03.359 "name": "BaseBdev3", 00:15:03.359 "uuid": 
"d1d5fa7f-c375-5f0c-8f2c-bc91af13fcb9", 00:15:03.359 "is_configured": true, 00:15:03.359 "data_offset": 2048, 00:15:03.359 "data_size": 63488 00:15:03.359 }, 00:15:03.359 { 00:15:03.359 "name": "BaseBdev4", 00:15:03.359 "uuid": "28d210fc-6d96-52f9-aad9-d943379a0278", 00:15:03.359 "is_configured": true, 00:15:03.359 "data_offset": 2048, 00:15:03.359 "data_size": 63488 00:15:03.359 } 00:15:03.359 ] 00:15:03.359 }' 00:15:03.359 05:04:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:03.359 05:04:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.618 05:04:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:03.618 05:04:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:03.618 05:04:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:03.618 05:04:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:03.618 05:04:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:03.618 05:04:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:03.618 05:04:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:03.618 05:04:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.618 05:04:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.618 05:04:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.618 05:04:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:03.618 "name": "raid_bdev1", 00:15:03.618 "uuid": "1f71498e-7b1a-46f6-ad6a-b6eb557a2406", 00:15:03.618 "strip_size_kb": 64, 00:15:03.618 "state": 
"online", 00:15:03.618 "raid_level": "raid5f", 00:15:03.618 "superblock": true, 00:15:03.618 "num_base_bdevs": 4, 00:15:03.618 "num_base_bdevs_discovered": 3, 00:15:03.618 "num_base_bdevs_operational": 3, 00:15:03.618 "base_bdevs_list": [ 00:15:03.618 { 00:15:03.618 "name": null, 00:15:03.618 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:03.618 "is_configured": false, 00:15:03.618 "data_offset": 0, 00:15:03.618 "data_size": 63488 00:15:03.618 }, 00:15:03.618 { 00:15:03.619 "name": "BaseBdev2", 00:15:03.619 "uuid": "b74f4d5f-bb8e-5fed-9380-2160a1e607fd", 00:15:03.619 "is_configured": true, 00:15:03.619 "data_offset": 2048, 00:15:03.619 "data_size": 63488 00:15:03.619 }, 00:15:03.619 { 00:15:03.619 "name": "BaseBdev3", 00:15:03.619 "uuid": "d1d5fa7f-c375-5f0c-8f2c-bc91af13fcb9", 00:15:03.619 "is_configured": true, 00:15:03.619 "data_offset": 2048, 00:15:03.619 "data_size": 63488 00:15:03.619 }, 00:15:03.619 { 00:15:03.619 "name": "BaseBdev4", 00:15:03.619 "uuid": "28d210fc-6d96-52f9-aad9-d943379a0278", 00:15:03.619 "is_configured": true, 00:15:03.619 "data_offset": 2048, 00:15:03.619 "data_size": 63488 00:15:03.619 } 00:15:03.619 ] 00:15:03.619 }' 00:15:03.619 05:04:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:03.619 05:04:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:03.619 05:04:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:03.619 05:04:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:03.619 05:04:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 95540 00:15:03.619 05:04:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@950 -- # '[' -z 95540 ']' 00:15:03.619 05:04:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # kill -0 95540 00:15:03.619 05:04:14 bdev_raid.raid5f_rebuild_test_sb 
-- common/autotest_common.sh@955 -- # uname 00:15:03.619 05:04:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:03.619 05:04:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 95540 00:15:03.878 05:04:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:03.878 killing process with pid 95540 00:15:03.878 Received shutdown signal, test time was about 60.000000 seconds 00:15:03.878 00:15:03.878 Latency(us) 00:15:03.878 [2024-12-14T05:04:14.761Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:03.878 [2024-12-14T05:04:14.761Z] =================================================================================================================== 00:15:03.878 [2024-12-14T05:04:14.761Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:03.878 05:04:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:03.878 05:04:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 95540' 00:15:03.878 05:04:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@969 -- # kill 95540 00:15:03.878 [2024-12-14 05:04:14.521296] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:03.878 [2024-12-14 05:04:14.521391] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:03.878 [2024-12-14 05:04:14.521458] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:03.878 [2024-12-14 05:04:14.521467] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state offline 00:15:03.878 05:04:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@974 -- # wait 95540 00:15:03.878 [2024-12-14 05:04:14.571935] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: 
raid_bdev_exit 00:15:04.138 05:04:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:15:04.138 00:15:04.138 real 0m25.184s 00:15:04.138 user 0m31.902s 00:15:04.138 sys 0m3.141s 00:15:04.138 ************************************ 00:15:04.138 END TEST raid5f_rebuild_test_sb 00:15:04.138 ************************************ 00:15:04.138 05:04:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:04.138 05:04:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.138 05:04:14 bdev_raid -- bdev/bdev_raid.sh@995 -- # base_blocklen=4096 00:15:04.138 05:04:14 bdev_raid -- bdev/bdev_raid.sh@997 -- # run_test raid_state_function_test_sb_4k raid_state_function_test raid1 2 true 00:15:04.138 05:04:14 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:15:04.138 05:04:14 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:04.138 05:04:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:04.138 ************************************ 00:15:04.138 START TEST raid_state_function_test_sb_4k 00:15:04.138 ************************************ 00:15:04.138 05:04:14 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 true 00:15:04.138 05:04:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:15:04.138 05:04:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:15:04.138 05:04:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:15:04.138 05:04:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:15:04.138 05:04:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:15:04.138 05:04:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs 
)) 00:15:04.138 05:04:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:15:04.138 05:04:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:04.138 05:04:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:04.138 05:04:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:15:04.138 05:04:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:04.138 05:04:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:04.138 05:04:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:15:04.138 05:04:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:15:04.138 05:04:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:15:04.138 05:04:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # local strip_size 00:15:04.138 05:04:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:15:04.138 05:04:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:15:04.138 05:04:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:15:04.138 05:04:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:15:04.138 05:04:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:15:04.138 05:04:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:15:04.138 05:04:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@229 -- # raid_pid=96332 00:15:04.138 05:04:14 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:15:04.138 05:04:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 96332' 00:15:04.138 Process raid pid: 96332 00:15:04.138 05:04:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@231 -- # waitforlisten 96332 00:15:04.138 05:04:14 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@831 -- # '[' -z 96332 ']' 00:15:04.138 05:04:14 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:04.138 05:04:14 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:04.138 05:04:14 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:04.138 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:04.138 05:04:14 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:04.138 05:04:14 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:04.138 [2024-12-14 05:04:14.982807] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:15:04.139 [2024-12-14 05:04:14.983017] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:04.398 [2024-12-14 05:04:15.144255] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:04.398 [2024-12-14 05:04:15.190129] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:04.398 [2024-12-14 05:04:15.232750] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:04.398 [2024-12-14 05:04:15.232867] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:04.966 05:04:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:04.966 05:04:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@864 -- # return 0 00:15:04.966 05:04:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:15:04.966 05:04:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.966 05:04:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:04.966 [2024-12-14 05:04:15.806611] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:04.966 [2024-12-14 05:04:15.806662] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:04.966 [2024-12-14 05:04:15.806681] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:04.966 [2024-12-14 05:04:15.806693] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:04.966 05:04:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:15:04.966 05:04:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:04.966 05:04:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:04.966 05:04:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:04.966 05:04:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:04.966 05:04:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:04.966 05:04:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:04.966 05:04:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:04.966 05:04:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:04.966 05:04:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:04.966 05:04:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:04.966 05:04:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:04.966 05:04:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.966 05:04:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:04.966 05:04:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:04.966 05:04:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.225 05:04:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:05.225 "name": "Existed_Raid", 00:15:05.225 "uuid": 
"b14935b6-4646-4d1b-b31e-6d8140667b28", 00:15:05.225 "strip_size_kb": 0, 00:15:05.225 "state": "configuring", 00:15:05.225 "raid_level": "raid1", 00:15:05.225 "superblock": true, 00:15:05.225 "num_base_bdevs": 2, 00:15:05.225 "num_base_bdevs_discovered": 0, 00:15:05.225 "num_base_bdevs_operational": 2, 00:15:05.225 "base_bdevs_list": [ 00:15:05.225 { 00:15:05.225 "name": "BaseBdev1", 00:15:05.225 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:05.225 "is_configured": false, 00:15:05.225 "data_offset": 0, 00:15:05.225 "data_size": 0 00:15:05.225 }, 00:15:05.225 { 00:15:05.225 "name": "BaseBdev2", 00:15:05.225 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:05.225 "is_configured": false, 00:15:05.225 "data_offset": 0, 00:15:05.225 "data_size": 0 00:15:05.225 } 00:15:05.225 ] 00:15:05.225 }' 00:15:05.225 05:04:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:05.225 05:04:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:05.485 05:04:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:05.485 05:04:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.485 05:04:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:05.485 [2024-12-14 05:04:16.277705] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:05.485 [2024-12-14 05:04:16.277786] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:15:05.485 05:04:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.485 05:04:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:15:05.485 05:04:16 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.485 05:04:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:05.485 [2024-12-14 05:04:16.289718] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:05.485 [2024-12-14 05:04:16.289784] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:05.485 [2024-12-14 05:04:16.289809] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:05.485 [2024-12-14 05:04:16.289829] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:05.485 05:04:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.485 05:04:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1 00:15:05.485 05:04:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.485 05:04:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:05.485 [2024-12-14 05:04:16.310523] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:05.485 BaseBdev1 00:15:05.485 05:04:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.485 05:04:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:15:05.485 05:04:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:15:05.485 05:04:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:05.485 05:04:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@901 -- # local i 00:15:05.485 05:04:16 bdev_raid.raid_state_function_test_sb_4k 
-- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:05.485 05:04:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:05.485 05:04:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:05.485 05:04:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.485 05:04:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:05.485 05:04:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.485 05:04:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:05.485 05:04:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.485 05:04:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:05.485 [ 00:15:05.485 { 00:15:05.485 "name": "BaseBdev1", 00:15:05.485 "aliases": [ 00:15:05.485 "ababc9f7-7a4f-46e3-875f-0dcfce4fb0ff" 00:15:05.485 ], 00:15:05.485 "product_name": "Malloc disk", 00:15:05.485 "block_size": 4096, 00:15:05.485 "num_blocks": 8192, 00:15:05.485 "uuid": "ababc9f7-7a4f-46e3-875f-0dcfce4fb0ff", 00:15:05.485 "assigned_rate_limits": { 00:15:05.485 "rw_ios_per_sec": 0, 00:15:05.485 "rw_mbytes_per_sec": 0, 00:15:05.485 "r_mbytes_per_sec": 0, 00:15:05.485 "w_mbytes_per_sec": 0 00:15:05.485 }, 00:15:05.485 "claimed": true, 00:15:05.485 "claim_type": "exclusive_write", 00:15:05.485 "zoned": false, 00:15:05.485 "supported_io_types": { 00:15:05.485 "read": true, 00:15:05.485 "write": true, 00:15:05.485 "unmap": true, 00:15:05.485 "flush": true, 00:15:05.485 "reset": true, 00:15:05.485 "nvme_admin": false, 00:15:05.485 "nvme_io": false, 00:15:05.485 "nvme_io_md": false, 00:15:05.485 "write_zeroes": true, 00:15:05.485 "zcopy": true, 00:15:05.485 
"get_zone_info": false, 00:15:05.485 "zone_management": false, 00:15:05.485 "zone_append": false, 00:15:05.485 "compare": false, 00:15:05.485 "compare_and_write": false, 00:15:05.485 "abort": true, 00:15:05.485 "seek_hole": false, 00:15:05.485 "seek_data": false, 00:15:05.485 "copy": true, 00:15:05.485 "nvme_iov_md": false 00:15:05.485 }, 00:15:05.485 "memory_domains": [ 00:15:05.485 { 00:15:05.485 "dma_device_id": "system", 00:15:05.485 "dma_device_type": 1 00:15:05.485 }, 00:15:05.485 { 00:15:05.485 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:05.485 "dma_device_type": 2 00:15:05.485 } 00:15:05.485 ], 00:15:05.485 "driver_specific": {} 00:15:05.485 } 00:15:05.485 ] 00:15:05.485 05:04:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.485 05:04:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@907 -- # return 0 00:15:05.485 05:04:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:05.486 05:04:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:05.486 05:04:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:05.486 05:04:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:05.486 05:04:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:05.486 05:04:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:05.486 05:04:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:05.486 05:04:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:05.486 05:04:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:15:05.486 05:04:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:05.486 05:04:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:05.486 05:04:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.486 05:04:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:05.486 05:04:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:05.745 05:04:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.745 05:04:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:05.745 "name": "Existed_Raid", 00:15:05.745 "uuid": "ea861397-1027-4c49-b7a1-a102227bd2c3", 00:15:05.745 "strip_size_kb": 0, 00:15:05.745 "state": "configuring", 00:15:05.745 "raid_level": "raid1", 00:15:05.745 "superblock": true, 00:15:05.745 "num_base_bdevs": 2, 00:15:05.745 "num_base_bdevs_discovered": 1, 00:15:05.745 "num_base_bdevs_operational": 2, 00:15:05.745 "base_bdevs_list": [ 00:15:05.745 { 00:15:05.745 "name": "BaseBdev1", 00:15:05.745 "uuid": "ababc9f7-7a4f-46e3-875f-0dcfce4fb0ff", 00:15:05.745 "is_configured": true, 00:15:05.745 "data_offset": 256, 00:15:05.745 "data_size": 7936 00:15:05.745 }, 00:15:05.745 { 00:15:05.745 "name": "BaseBdev2", 00:15:05.745 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:05.745 "is_configured": false, 00:15:05.745 "data_offset": 0, 00:15:05.745 "data_size": 0 00:15:05.745 } 00:15:05.745 ] 00:15:05.745 }' 00:15:05.745 05:04:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:05.745 05:04:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:06.004 05:04:16 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:06.004 05:04:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.004 05:04:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:06.004 [2024-12-14 05:04:16.825638] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:06.004 [2024-12-14 05:04:16.825678] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:15:06.004 05:04:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.004 05:04:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:15:06.004 05:04:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.004 05:04:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:06.004 [2024-12-14 05:04:16.837645] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:06.004 [2024-12-14 05:04:16.839372] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:06.004 [2024-12-14 05:04:16.839409] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:06.004 05:04:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.005 05:04:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:15:06.005 05:04:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:06.005 05:04:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:06.005 05:04:16 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:06.005 05:04:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:06.005 05:04:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:06.005 05:04:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:06.005 05:04:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:06.005 05:04:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:06.005 05:04:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:06.005 05:04:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:06.005 05:04:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:06.005 05:04:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:06.005 05:04:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.005 05:04:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:06.005 05:04:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:06.005 05:04:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.263 05:04:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:06.263 "name": "Existed_Raid", 00:15:06.263 "uuid": "76b8d9ae-a278-405e-b9af-2778f0ff94d8", 00:15:06.263 "strip_size_kb": 0, 00:15:06.263 "state": "configuring", 00:15:06.263 "raid_level": "raid1", 00:15:06.263 "superblock": true, 
00:15:06.263 "num_base_bdevs": 2, 00:15:06.264 "num_base_bdevs_discovered": 1, 00:15:06.264 "num_base_bdevs_operational": 2, 00:15:06.264 "base_bdevs_list": [ 00:15:06.264 { 00:15:06.264 "name": "BaseBdev1", 00:15:06.264 "uuid": "ababc9f7-7a4f-46e3-875f-0dcfce4fb0ff", 00:15:06.264 "is_configured": true, 00:15:06.264 "data_offset": 256, 00:15:06.264 "data_size": 7936 00:15:06.264 }, 00:15:06.264 { 00:15:06.264 "name": "BaseBdev2", 00:15:06.264 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:06.264 "is_configured": false, 00:15:06.264 "data_offset": 0, 00:15:06.264 "data_size": 0 00:15:06.264 } 00:15:06.264 ] 00:15:06.264 }' 00:15:06.264 05:04:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:06.264 05:04:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:06.523 05:04:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2 00:15:06.523 05:04:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.523 05:04:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:06.523 [2024-12-14 05:04:17.299609] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:06.523 [2024-12-14 05:04:17.300257] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:15:06.523 [2024-12-14 05:04:17.300327] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:15:06.523 BaseBdev2 00:15:06.523 [2024-12-14 05:04:17.301243] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:15:06.523 05:04:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.523 [2024-12-14 05:04:17.301748] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:15:06.523 
05:04:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:15:06.523 [2024-12-14 05:04:17.301815] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:15:06.523 05:04:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:15:06.523 [2024-12-14 05:04:17.302285] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:06.523 05:04:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:06.523 05:04:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@901 -- # local i 00:15:06.523 05:04:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:06.523 05:04:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:06.523 05:04:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:06.523 05:04:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.523 05:04:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:06.523 05:04:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.523 05:04:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:06.523 05:04:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.523 05:04:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:06.523 [ 00:15:06.523 { 00:15:06.523 "name": "BaseBdev2", 00:15:06.523 "aliases": [ 00:15:06.523 "65371158-435e-4cc8-a667-8f85ed8f567a" 00:15:06.523 ], 00:15:06.523 "product_name": "Malloc 
disk", 00:15:06.523 "block_size": 4096, 00:15:06.523 "num_blocks": 8192, 00:15:06.523 "uuid": "65371158-435e-4cc8-a667-8f85ed8f567a", 00:15:06.523 "assigned_rate_limits": { 00:15:06.523 "rw_ios_per_sec": 0, 00:15:06.523 "rw_mbytes_per_sec": 0, 00:15:06.523 "r_mbytes_per_sec": 0, 00:15:06.523 "w_mbytes_per_sec": 0 00:15:06.523 }, 00:15:06.523 "claimed": true, 00:15:06.523 "claim_type": "exclusive_write", 00:15:06.523 "zoned": false, 00:15:06.523 "supported_io_types": { 00:15:06.523 "read": true, 00:15:06.523 "write": true, 00:15:06.523 "unmap": true, 00:15:06.523 "flush": true, 00:15:06.523 "reset": true, 00:15:06.523 "nvme_admin": false, 00:15:06.523 "nvme_io": false, 00:15:06.523 "nvme_io_md": false, 00:15:06.523 "write_zeroes": true, 00:15:06.523 "zcopy": true, 00:15:06.523 "get_zone_info": false, 00:15:06.523 "zone_management": false, 00:15:06.523 "zone_append": false, 00:15:06.523 "compare": false, 00:15:06.523 "compare_and_write": false, 00:15:06.523 "abort": true, 00:15:06.523 "seek_hole": false, 00:15:06.524 "seek_data": false, 00:15:06.524 "copy": true, 00:15:06.524 "nvme_iov_md": false 00:15:06.524 }, 00:15:06.524 "memory_domains": [ 00:15:06.524 { 00:15:06.524 "dma_device_id": "system", 00:15:06.524 "dma_device_type": 1 00:15:06.524 }, 00:15:06.524 { 00:15:06.524 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:06.524 "dma_device_type": 2 00:15:06.524 } 00:15:06.524 ], 00:15:06.524 "driver_specific": {} 00:15:06.524 } 00:15:06.524 ] 00:15:06.524 05:04:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.524 05:04:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@907 -- # return 0 00:15:06.524 05:04:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:06.524 05:04:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:06.524 05:04:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@255 
-- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:15:06.524 05:04:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:06.524 05:04:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:06.524 05:04:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:06.524 05:04:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:06.524 05:04:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:06.524 05:04:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:06.524 05:04:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:06.524 05:04:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:06.524 05:04:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:06.524 05:04:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:06.524 05:04:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:06.524 05:04:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.524 05:04:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:06.524 05:04:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.524 05:04:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:06.524 "name": "Existed_Raid", 00:15:06.524 "uuid": "76b8d9ae-a278-405e-b9af-2778f0ff94d8", 00:15:06.524 "strip_size_kb": 0, 00:15:06.524 "state": "online", 
00:15:06.524 "raid_level": "raid1", 00:15:06.524 "superblock": true, 00:15:06.524 "num_base_bdevs": 2, 00:15:06.524 "num_base_bdevs_discovered": 2, 00:15:06.524 "num_base_bdevs_operational": 2, 00:15:06.524 "base_bdevs_list": [ 00:15:06.524 { 00:15:06.524 "name": "BaseBdev1", 00:15:06.524 "uuid": "ababc9f7-7a4f-46e3-875f-0dcfce4fb0ff", 00:15:06.524 "is_configured": true, 00:15:06.524 "data_offset": 256, 00:15:06.524 "data_size": 7936 00:15:06.524 }, 00:15:06.524 { 00:15:06.524 "name": "BaseBdev2", 00:15:06.524 "uuid": "65371158-435e-4cc8-a667-8f85ed8f567a", 00:15:06.524 "is_configured": true, 00:15:06.524 "data_offset": 256, 00:15:06.524 "data_size": 7936 00:15:06.524 } 00:15:06.524 ] 00:15:06.524 }' 00:15:06.524 05:04:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:06.524 05:04:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:07.093 05:04:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:15:07.093 05:04:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:07.093 05:04:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:07.093 05:04:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:07.093 05:04:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local name 00:15:07.093 05:04:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:07.093 05:04:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:07.093 05:04:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.093 05:04:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set 
+x 00:15:07.093 05:04:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:07.093 [2024-12-14 05:04:17.827046] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:07.093 05:04:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.093 05:04:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:07.093 "name": "Existed_Raid", 00:15:07.093 "aliases": [ 00:15:07.093 "76b8d9ae-a278-405e-b9af-2778f0ff94d8" 00:15:07.093 ], 00:15:07.093 "product_name": "Raid Volume", 00:15:07.093 "block_size": 4096, 00:15:07.093 "num_blocks": 7936, 00:15:07.093 "uuid": "76b8d9ae-a278-405e-b9af-2778f0ff94d8", 00:15:07.093 "assigned_rate_limits": { 00:15:07.093 "rw_ios_per_sec": 0, 00:15:07.093 "rw_mbytes_per_sec": 0, 00:15:07.093 "r_mbytes_per_sec": 0, 00:15:07.093 "w_mbytes_per_sec": 0 00:15:07.093 }, 00:15:07.093 "claimed": false, 00:15:07.093 "zoned": false, 00:15:07.093 "supported_io_types": { 00:15:07.093 "read": true, 00:15:07.093 "write": true, 00:15:07.093 "unmap": false, 00:15:07.093 "flush": false, 00:15:07.093 "reset": true, 00:15:07.093 "nvme_admin": false, 00:15:07.093 "nvme_io": false, 00:15:07.093 "nvme_io_md": false, 00:15:07.093 "write_zeroes": true, 00:15:07.093 "zcopy": false, 00:15:07.093 "get_zone_info": false, 00:15:07.093 "zone_management": false, 00:15:07.093 "zone_append": false, 00:15:07.093 "compare": false, 00:15:07.093 "compare_and_write": false, 00:15:07.093 "abort": false, 00:15:07.093 "seek_hole": false, 00:15:07.093 "seek_data": false, 00:15:07.093 "copy": false, 00:15:07.093 "nvme_iov_md": false 00:15:07.093 }, 00:15:07.093 "memory_domains": [ 00:15:07.093 { 00:15:07.093 "dma_device_id": "system", 00:15:07.093 "dma_device_type": 1 00:15:07.093 }, 00:15:07.093 { 00:15:07.093 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:07.093 "dma_device_type": 2 00:15:07.093 }, 00:15:07.093 { 00:15:07.093 
"dma_device_id": "system", 00:15:07.093 "dma_device_type": 1 00:15:07.093 }, 00:15:07.093 { 00:15:07.093 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:07.093 "dma_device_type": 2 00:15:07.093 } 00:15:07.093 ], 00:15:07.093 "driver_specific": { 00:15:07.093 "raid": { 00:15:07.093 "uuid": "76b8d9ae-a278-405e-b9af-2778f0ff94d8", 00:15:07.093 "strip_size_kb": 0, 00:15:07.093 "state": "online", 00:15:07.093 "raid_level": "raid1", 00:15:07.093 "superblock": true, 00:15:07.093 "num_base_bdevs": 2, 00:15:07.093 "num_base_bdevs_discovered": 2, 00:15:07.093 "num_base_bdevs_operational": 2, 00:15:07.093 "base_bdevs_list": [ 00:15:07.093 { 00:15:07.093 "name": "BaseBdev1", 00:15:07.093 "uuid": "ababc9f7-7a4f-46e3-875f-0dcfce4fb0ff", 00:15:07.093 "is_configured": true, 00:15:07.093 "data_offset": 256, 00:15:07.093 "data_size": 7936 00:15:07.093 }, 00:15:07.093 { 00:15:07.093 "name": "BaseBdev2", 00:15:07.094 "uuid": "65371158-435e-4cc8-a667-8f85ed8f567a", 00:15:07.094 "is_configured": true, 00:15:07.094 "data_offset": 256, 00:15:07.094 "data_size": 7936 00:15:07.094 } 00:15:07.094 ] 00:15:07.094 } 00:15:07.094 } 00:15:07.094 }' 00:15:07.094 05:04:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:07.094 05:04:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:15:07.094 BaseBdev2' 00:15:07.094 05:04:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:07.094 05:04:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:15:07.094 05:04:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:07.094 05:04:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 
00:15:07.094 05:04:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.094 05:04:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:07.353 05:04:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:07.353 05:04:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.353 05:04:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:15:07.353 05:04:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:15:07.353 05:04:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:07.353 05:04:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:07.353 05:04:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.353 05:04:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:07.353 05:04:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:07.353 05:04:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.353 05:04:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:15:07.353 05:04:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:15:07.353 05:04:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:07.353 05:04:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.353 
05:04:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:07.353 [2024-12-14 05:04:18.070404] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:07.353 05:04:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.353 05:04:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@260 -- # local expected_state 00:15:07.353 05:04:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:15:07.353 05:04:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:07.353 05:04:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:15:07.353 05:04:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:15:07.353 05:04:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:15:07.353 05:04:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:07.353 05:04:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:07.353 05:04:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:07.353 05:04:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:07.353 05:04:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:07.353 05:04:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:07.353 05:04:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:07.353 05:04:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:07.353 05:04:18 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:07.353 05:04:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:07.353 05:04:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.353 05:04:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:07.353 05:04:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:07.353 05:04:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.353 05:04:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:07.353 "name": "Existed_Raid", 00:15:07.353 "uuid": "76b8d9ae-a278-405e-b9af-2778f0ff94d8", 00:15:07.353 "strip_size_kb": 0, 00:15:07.353 "state": "online", 00:15:07.353 "raid_level": "raid1", 00:15:07.353 "superblock": true, 00:15:07.353 "num_base_bdevs": 2, 00:15:07.353 "num_base_bdevs_discovered": 1, 00:15:07.353 "num_base_bdevs_operational": 1, 00:15:07.353 "base_bdevs_list": [ 00:15:07.353 { 00:15:07.353 "name": null, 00:15:07.353 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:07.353 "is_configured": false, 00:15:07.353 "data_offset": 0, 00:15:07.353 "data_size": 7936 00:15:07.353 }, 00:15:07.353 { 00:15:07.353 "name": "BaseBdev2", 00:15:07.353 "uuid": "65371158-435e-4cc8-a667-8f85ed8f567a", 00:15:07.353 "is_configured": true, 00:15:07.353 "data_offset": 256, 00:15:07.353 "data_size": 7936 00:15:07.353 } 00:15:07.353 ] 00:15:07.353 }' 00:15:07.353 05:04:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:07.353 05:04:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:07.922 05:04:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:15:07.922 05:04:18 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:07.922 05:04:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:07.922 05:04:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.922 05:04:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:07.922 05:04:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:07.922 05:04:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.922 05:04:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:07.922 05:04:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:07.922 05:04:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:15:07.922 05:04:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.922 05:04:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:07.922 [2024-12-14 05:04:18.572844] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:07.922 [2024-12-14 05:04:18.572942] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:07.922 [2024-12-14 05:04:18.584625] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:07.922 [2024-12-14 05:04:18.584676] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:07.922 [2024-12-14 05:04:18.584687] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:15:07.922 05:04:18 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.922 05:04:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:07.922 05:04:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:07.922 05:04:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:07.922 05:04:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.922 05:04:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:07.922 05:04:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:15:07.922 05:04:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.922 05:04:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:15:07.922 05:04:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:15:07.922 05:04:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:15:07.922 05:04:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@326 -- # killprocess 96332 00:15:07.922 05:04:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@950 -- # '[' -z 96332 ']' 00:15:07.922 05:04:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@954 -- # kill -0 96332 00:15:07.922 05:04:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@955 -- # uname 00:15:07.922 05:04:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:07.922 05:04:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 96332 00:15:07.922 killing process with pid 96332 00:15:07.922 05:04:18 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:07.922 05:04:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:07.922 05:04:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@968 -- # echo 'killing process with pid 96332' 00:15:07.922 05:04:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@969 -- # kill 96332 00:15:07.922 [2024-12-14 05:04:18.666982] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:07.922 05:04:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@974 -- # wait 96332 00:15:07.922 [2024-12-14 05:04:18.667968] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:08.182 05:04:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@328 -- # return 0 00:15:08.182 00:15:08.182 real 0m4.021s 00:15:08.182 user 0m6.308s 00:15:08.182 sys 0m0.849s 00:15:08.182 05:04:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:08.182 05:04:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:08.183 ************************************ 00:15:08.183 END TEST raid_state_function_test_sb_4k 00:15:08.183 ************************************ 00:15:08.183 05:04:18 bdev_raid -- bdev/bdev_raid.sh@998 -- # run_test raid_superblock_test_4k raid_superblock_test raid1 2 00:15:08.183 05:04:18 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:15:08.183 05:04:18 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:08.183 05:04:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:08.183 ************************************ 00:15:08.183 START TEST raid_superblock_test_4k 00:15:08.183 ************************************ 00:15:08.183 05:04:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1125 -- # 
raid_superblock_test raid1 2 00:15:08.183 05:04:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:15:08.183 05:04:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:15:08.183 05:04:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:15:08.183 05:04:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:15:08.183 05:04:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:15:08.183 05:04:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:15:08.183 05:04:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:15:08.183 05:04:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:15:08.183 05:04:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:15:08.183 05:04:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@399 -- # local strip_size 00:15:08.183 05:04:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:15:08.183 05:04:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:15:08.183 05:04:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:15:08.183 05:04:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:15:08.183 05:04:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:15:08.183 05:04:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@412 -- # raid_pid=96568 00:15:08.183 05:04:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:15:08.183 05:04:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@413 -- # 
waitforlisten 96568 00:15:08.183 05:04:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@831 -- # '[' -z 96568 ']' 00:15:08.183 05:04:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:08.183 05:04:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:08.183 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:08.183 05:04:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:08.183 05:04:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:08.183 05:04:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:08.443 [2024-12-14 05:04:19.073592] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:15:08.443 [2024-12-14 05:04:19.073729] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96568 ] 00:15:08.443 [2024-12-14 05:04:19.232223] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:08.443 [2024-12-14 05:04:19.276869] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:08.443 [2024-12-14 05:04:19.319703] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:08.443 [2024-12-14 05:04:19.319744] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:09.381 05:04:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:09.381 05:04:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@864 -- # return 0 00:15:09.381 05:04:19 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:15:09.381 05:04:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:09.381 05:04:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:15:09.381 05:04:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:15:09.381 05:04:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:15:09.381 05:04:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:09.381 05:04:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:09.381 05:04:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:09.381 05:04:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc1 00:15:09.381 05:04:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.381 05:04:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:09.381 malloc1 00:15:09.381 05:04:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.381 05:04:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:09.381 05:04:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.381 05:04:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:09.381 [2024-12-14 05:04:19.926404] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:09.381 [2024-12-14 05:04:19.926471] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:09.381 
[2024-12-14 05:04:19.926493] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:09.381 [2024-12-14 05:04:19.926506] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:09.381 [2024-12-14 05:04:19.928586] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:09.381 [2024-12-14 05:04:19.928626] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:09.381 pt1 00:15:09.381 05:04:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.381 05:04:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:09.381 05:04:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:09.381 05:04:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:15:09.381 05:04:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:15:09.381 05:04:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:15:09.381 05:04:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:09.381 05:04:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:09.381 05:04:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:09.381 05:04:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc2 00:15:09.381 05:04:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.381 05:04:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:09.381 malloc2 00:15:09.381 05:04:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:15:09.381 05:04:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:09.381 05:04:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.381 05:04:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:09.381 [2024-12-14 05:04:19.963356] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:09.381 [2024-12-14 05:04:19.963408] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:09.381 [2024-12-14 05:04:19.963424] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:09.381 [2024-12-14 05:04:19.963433] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:09.381 [2024-12-14 05:04:19.965356] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:09.381 [2024-12-14 05:04:19.965393] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:09.381 pt2 00:15:09.381 05:04:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.381 05:04:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:09.381 05:04:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:09.381 05:04:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:15:09.381 05:04:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.381 05:04:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:09.381 [2024-12-14 05:04:19.975380] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:09.381 [2024-12-14 05:04:19.977078] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:09.381 [2024-12-14 05:04:19.977224] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:15:09.381 [2024-12-14 05:04:19.977243] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:15:09.381 [2024-12-14 05:04:19.977476] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:15:09.381 [2024-12-14 05:04:19.977612] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:15:09.381 [2024-12-14 05:04:19.977629] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:15:09.381 [2024-12-14 05:04:19.977744] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:09.381 05:04:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.381 05:04:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:09.381 05:04:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:09.381 05:04:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:09.381 05:04:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:09.382 05:04:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:09.382 05:04:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:09.382 05:04:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:09.382 05:04:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:09.382 05:04:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:15:09.382 05:04:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:09.382 05:04:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:09.382 05:04:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:09.382 05:04:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.382 05:04:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:09.382 05:04:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.382 05:04:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:09.382 "name": "raid_bdev1", 00:15:09.382 "uuid": "b9ed2ef2-ade1-4af1-aa54-4adec0002a6d", 00:15:09.382 "strip_size_kb": 0, 00:15:09.382 "state": "online", 00:15:09.382 "raid_level": "raid1", 00:15:09.382 "superblock": true, 00:15:09.382 "num_base_bdevs": 2, 00:15:09.382 "num_base_bdevs_discovered": 2, 00:15:09.382 "num_base_bdevs_operational": 2, 00:15:09.382 "base_bdevs_list": [ 00:15:09.382 { 00:15:09.382 "name": "pt1", 00:15:09.382 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:09.382 "is_configured": true, 00:15:09.382 "data_offset": 256, 00:15:09.382 "data_size": 7936 00:15:09.382 }, 00:15:09.382 { 00:15:09.382 "name": "pt2", 00:15:09.382 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:09.382 "is_configured": true, 00:15:09.382 "data_offset": 256, 00:15:09.382 "data_size": 7936 00:15:09.382 } 00:15:09.382 ] 00:15:09.382 }' 00:15:09.382 05:04:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:09.382 05:04:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:09.640 05:04:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:15:09.640 05:04:20 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:09.640 05:04:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:09.640 05:04:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:09.640 05:04:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:15:09.640 05:04:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:09.641 05:04:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:09.641 05:04:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:09.641 05:04:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.641 05:04:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:09.641 [2024-12-14 05:04:20.462753] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:09.641 05:04:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.641 05:04:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:09.641 "name": "raid_bdev1", 00:15:09.641 "aliases": [ 00:15:09.641 "b9ed2ef2-ade1-4af1-aa54-4adec0002a6d" 00:15:09.641 ], 00:15:09.641 "product_name": "Raid Volume", 00:15:09.641 "block_size": 4096, 00:15:09.641 "num_blocks": 7936, 00:15:09.641 "uuid": "b9ed2ef2-ade1-4af1-aa54-4adec0002a6d", 00:15:09.641 "assigned_rate_limits": { 00:15:09.641 "rw_ios_per_sec": 0, 00:15:09.641 "rw_mbytes_per_sec": 0, 00:15:09.641 "r_mbytes_per_sec": 0, 00:15:09.641 "w_mbytes_per_sec": 0 00:15:09.641 }, 00:15:09.641 "claimed": false, 00:15:09.641 "zoned": false, 00:15:09.641 "supported_io_types": { 00:15:09.641 "read": true, 00:15:09.641 "write": true, 00:15:09.641 "unmap": false, 00:15:09.641 "flush": false, 
00:15:09.641 "reset": true, 00:15:09.641 "nvme_admin": false, 00:15:09.641 "nvme_io": false, 00:15:09.641 "nvme_io_md": false, 00:15:09.641 "write_zeroes": true, 00:15:09.641 "zcopy": false, 00:15:09.641 "get_zone_info": false, 00:15:09.641 "zone_management": false, 00:15:09.641 "zone_append": false, 00:15:09.641 "compare": false, 00:15:09.641 "compare_and_write": false, 00:15:09.641 "abort": false, 00:15:09.641 "seek_hole": false, 00:15:09.641 "seek_data": false, 00:15:09.641 "copy": false, 00:15:09.641 "nvme_iov_md": false 00:15:09.641 }, 00:15:09.641 "memory_domains": [ 00:15:09.641 { 00:15:09.641 "dma_device_id": "system", 00:15:09.641 "dma_device_type": 1 00:15:09.641 }, 00:15:09.641 { 00:15:09.641 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:09.641 "dma_device_type": 2 00:15:09.641 }, 00:15:09.641 { 00:15:09.641 "dma_device_id": "system", 00:15:09.641 "dma_device_type": 1 00:15:09.641 }, 00:15:09.641 { 00:15:09.641 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:09.641 "dma_device_type": 2 00:15:09.641 } 00:15:09.641 ], 00:15:09.641 "driver_specific": { 00:15:09.641 "raid": { 00:15:09.641 "uuid": "b9ed2ef2-ade1-4af1-aa54-4adec0002a6d", 00:15:09.641 "strip_size_kb": 0, 00:15:09.641 "state": "online", 00:15:09.641 "raid_level": "raid1", 00:15:09.641 "superblock": true, 00:15:09.641 "num_base_bdevs": 2, 00:15:09.641 "num_base_bdevs_discovered": 2, 00:15:09.641 "num_base_bdevs_operational": 2, 00:15:09.641 "base_bdevs_list": [ 00:15:09.641 { 00:15:09.641 "name": "pt1", 00:15:09.641 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:09.641 "is_configured": true, 00:15:09.641 "data_offset": 256, 00:15:09.641 "data_size": 7936 00:15:09.641 }, 00:15:09.641 { 00:15:09.641 "name": "pt2", 00:15:09.641 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:09.641 "is_configured": true, 00:15:09.641 "data_offset": 256, 00:15:09.641 "data_size": 7936 00:15:09.641 } 00:15:09.641 ] 00:15:09.641 } 00:15:09.641 } 00:15:09.641 }' 00:15:09.641 05:04:20 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:09.933 05:04:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:09.933 pt2' 00:15:09.933 05:04:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:09.933 05:04:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:15:09.933 05:04:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:09.933 05:04:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:09.933 05:04:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:09.933 05:04:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.933 05:04:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:09.933 05:04:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.933 05:04:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:15:09.933 05:04:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:15:09.933 05:04:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:09.933 05:04:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:09.933 05:04:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.933 05:04:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:09.933 05:04:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # 
jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:09.933 05:04:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.933 05:04:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:15:09.933 05:04:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:15:09.933 05:04:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:09.933 05:04:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:15:09.933 05:04:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.933 05:04:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:09.933 [2024-12-14 05:04:20.694315] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:09.933 05:04:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.933 05:04:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=b9ed2ef2-ade1-4af1-aa54-4adec0002a6d 00:15:09.933 05:04:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@436 -- # '[' -z b9ed2ef2-ade1-4af1-aa54-4adec0002a6d ']' 00:15:09.933 05:04:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:09.933 05:04:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.933 05:04:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:09.933 [2024-12-14 05:04:20.738019] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:09.933 [2024-12-14 05:04:20.738046] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:09.933 [2024-12-14 05:04:20.738110] bdev_raid.c: 
492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:09.933 [2024-12-14 05:04:20.738182] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:09.933 [2024-12-14 05:04:20.738192] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:15:09.933 05:04:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.933 05:04:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:09.933 05:04:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.933 05:04:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:09.933 05:04:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:15:09.933 05:04:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.933 05:04:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:15:09.933 05:04:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:15:09.933 05:04:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:09.933 05:04:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:15:09.933 05:04:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.933 05:04:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:10.210 05:04:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.210 05:04:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:10.210 05:04:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 
00:15:10.210 05:04:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.210 05:04:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:10.210 05:04:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.210 05:04:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:15:10.210 05:04:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.210 05:04:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:15:10.210 05:04:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:10.210 05:04:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.210 05:04:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:15:10.210 05:04:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:15:10.210 05:04:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@650 -- # local es=0 00:15:10.210 05:04:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:15:10.210 05:04:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:15:10.210 05:04:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:10.210 05:04:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:15:10.210 05:04:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:10.210 05:04:20 bdev_raid.raid_superblock_test_4k -- 
common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:15:10.210 05:04:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.210 05:04:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:10.210 [2024-12-14 05:04:20.877805] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:15:10.210 [2024-12-14 05:04:20.879596] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:15:10.210 [2024-12-14 05:04:20.879669] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:15:10.210 [2024-12-14 05:04:20.879716] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:15:10.210 [2024-12-14 05:04:20.879735] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:10.210 [2024-12-14 05:04:20.879744] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state configuring 00:15:10.210 request: 00:15:10.210 { 00:15:10.210 "name": "raid_bdev1", 00:15:10.210 "raid_level": "raid1", 00:15:10.210 "base_bdevs": [ 00:15:10.210 "malloc1", 00:15:10.210 "malloc2" 00:15:10.210 ], 00:15:10.210 "superblock": false, 00:15:10.210 "method": "bdev_raid_create", 00:15:10.210 "req_id": 1 00:15:10.210 } 00:15:10.210 Got JSON-RPC error response 00:15:10.210 response: 00:15:10.210 { 00:15:10.210 "code": -17, 00:15:10.210 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:15:10.210 } 00:15:10.210 05:04:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:15:10.210 05:04:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@653 -- # es=1 00:15:10.210 05:04:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@661 -- # (( es > 
128 )) 00:15:10.210 05:04:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:10.210 05:04:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:10.210 05:04:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:10.210 05:04:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:15:10.210 05:04:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.210 05:04:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:10.210 05:04:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.210 05:04:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:15:10.210 05:04:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:15:10.210 05:04:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:10.210 05:04:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.210 05:04:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:10.210 [2024-12-14 05:04:20.941666] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:10.210 [2024-12-14 05:04:20.941709] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:10.210 [2024-12-14 05:04:20.941725] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:10.210 [2024-12-14 05:04:20.941734] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:10.210 [2024-12-14 05:04:20.943723] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:10.210 [2024-12-14 05:04:20.943758] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:10.210 [2024-12-14 05:04:20.943815] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:10.210 [2024-12-14 05:04:20.943849] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:10.210 pt1 00:15:10.210 05:04:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.210 05:04:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:15:10.210 05:04:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:10.210 05:04:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:10.210 05:04:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:10.210 05:04:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:10.210 05:04:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:10.210 05:04:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:10.210 05:04:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:10.210 05:04:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:10.210 05:04:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:10.210 05:04:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:10.211 05:04:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:10.211 05:04:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.211 05:04:20 bdev_raid.raid_superblock_test_4k -- 
common/autotest_common.sh@10 -- # set +x 00:15:10.211 05:04:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.211 05:04:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:10.211 "name": "raid_bdev1", 00:15:10.211 "uuid": "b9ed2ef2-ade1-4af1-aa54-4adec0002a6d", 00:15:10.211 "strip_size_kb": 0, 00:15:10.211 "state": "configuring", 00:15:10.211 "raid_level": "raid1", 00:15:10.211 "superblock": true, 00:15:10.211 "num_base_bdevs": 2, 00:15:10.211 "num_base_bdevs_discovered": 1, 00:15:10.211 "num_base_bdevs_operational": 2, 00:15:10.211 "base_bdevs_list": [ 00:15:10.211 { 00:15:10.211 "name": "pt1", 00:15:10.211 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:10.211 "is_configured": true, 00:15:10.211 "data_offset": 256, 00:15:10.211 "data_size": 7936 00:15:10.211 }, 00:15:10.211 { 00:15:10.211 "name": null, 00:15:10.211 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:10.211 "is_configured": false, 00:15:10.211 "data_offset": 256, 00:15:10.211 "data_size": 7936 00:15:10.211 } 00:15:10.211 ] 00:15:10.211 }' 00:15:10.211 05:04:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:10.211 05:04:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:10.824 05:04:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:15:10.824 05:04:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:15:10.824 05:04:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:10.824 05:04:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:10.824 05:04:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.824 05:04:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 
-- # set +x 00:15:10.824 [2024-12-14 05:04:21.392938] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:10.824 [2024-12-14 05:04:21.392985] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:10.824 [2024-12-14 05:04:21.393005] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:15:10.824 [2024-12-14 05:04:21.393013] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:10.824 [2024-12-14 05:04:21.393438] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:10.824 [2024-12-14 05:04:21.393492] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:10.824 [2024-12-14 05:04:21.393581] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:10.824 [2024-12-14 05:04:21.393626] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:10.824 [2024-12-14 05:04:21.393734] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:15:10.824 [2024-12-14 05:04:21.393774] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:15:10.824 [2024-12-14 05:04:21.394007] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:15:10.824 [2024-12-14 05:04:21.394153] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:15:10.824 [2024-12-14 05:04:21.394211] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:15:10.824 [2024-12-14 05:04:21.394311] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:10.824 pt2 00:15:10.824 05:04:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.824 05:04:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:10.824 05:04:21 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:10.824 05:04:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:10.825 05:04:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:10.825 05:04:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:10.825 05:04:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:10.825 05:04:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:10.825 05:04:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:10.825 05:04:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:10.825 05:04:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:10.825 05:04:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:10.825 05:04:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:10.825 05:04:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:10.825 05:04:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:10.825 05:04:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.825 05:04:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:10.825 05:04:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.825 05:04:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:10.825 "name": "raid_bdev1", 00:15:10.825 "uuid": "b9ed2ef2-ade1-4af1-aa54-4adec0002a6d", 00:15:10.825 
"strip_size_kb": 0, 00:15:10.825 "state": "online", 00:15:10.825 "raid_level": "raid1", 00:15:10.825 "superblock": true, 00:15:10.825 "num_base_bdevs": 2, 00:15:10.825 "num_base_bdevs_discovered": 2, 00:15:10.825 "num_base_bdevs_operational": 2, 00:15:10.825 "base_bdevs_list": [ 00:15:10.825 { 00:15:10.825 "name": "pt1", 00:15:10.825 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:10.825 "is_configured": true, 00:15:10.825 "data_offset": 256, 00:15:10.825 "data_size": 7936 00:15:10.825 }, 00:15:10.825 { 00:15:10.825 "name": "pt2", 00:15:10.825 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:10.825 "is_configured": true, 00:15:10.825 "data_offset": 256, 00:15:10.825 "data_size": 7936 00:15:10.825 } 00:15:10.825 ] 00:15:10.825 }' 00:15:10.825 05:04:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:10.825 05:04:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:11.085 05:04:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:15:11.085 05:04:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:11.085 05:04:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:11.085 05:04:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:11.085 05:04:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:15:11.085 05:04:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:11.085 05:04:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:11.085 05:04:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.085 05:04:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:11.085 05:04:21 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:11.085 [2024-12-14 05:04:21.788492] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:11.085 05:04:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.085 05:04:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:11.085 "name": "raid_bdev1", 00:15:11.085 "aliases": [ 00:15:11.085 "b9ed2ef2-ade1-4af1-aa54-4adec0002a6d" 00:15:11.085 ], 00:15:11.085 "product_name": "Raid Volume", 00:15:11.085 "block_size": 4096, 00:15:11.085 "num_blocks": 7936, 00:15:11.085 "uuid": "b9ed2ef2-ade1-4af1-aa54-4adec0002a6d", 00:15:11.085 "assigned_rate_limits": { 00:15:11.085 "rw_ios_per_sec": 0, 00:15:11.085 "rw_mbytes_per_sec": 0, 00:15:11.085 "r_mbytes_per_sec": 0, 00:15:11.085 "w_mbytes_per_sec": 0 00:15:11.085 }, 00:15:11.085 "claimed": false, 00:15:11.085 "zoned": false, 00:15:11.085 "supported_io_types": { 00:15:11.085 "read": true, 00:15:11.085 "write": true, 00:15:11.085 "unmap": false, 00:15:11.085 "flush": false, 00:15:11.085 "reset": true, 00:15:11.085 "nvme_admin": false, 00:15:11.085 "nvme_io": false, 00:15:11.085 "nvme_io_md": false, 00:15:11.085 "write_zeroes": true, 00:15:11.085 "zcopy": false, 00:15:11.085 "get_zone_info": false, 00:15:11.085 "zone_management": false, 00:15:11.085 "zone_append": false, 00:15:11.085 "compare": false, 00:15:11.085 "compare_and_write": false, 00:15:11.085 "abort": false, 00:15:11.085 "seek_hole": false, 00:15:11.085 "seek_data": false, 00:15:11.085 "copy": false, 00:15:11.085 "nvme_iov_md": false 00:15:11.085 }, 00:15:11.085 "memory_domains": [ 00:15:11.085 { 00:15:11.085 "dma_device_id": "system", 00:15:11.085 "dma_device_type": 1 00:15:11.085 }, 00:15:11.085 { 00:15:11.085 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:11.085 "dma_device_type": 2 00:15:11.085 }, 00:15:11.085 { 00:15:11.085 "dma_device_id": "system", 00:15:11.085 
"dma_device_type": 1 00:15:11.085 }, 00:15:11.085 { 00:15:11.085 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:11.085 "dma_device_type": 2 00:15:11.085 } 00:15:11.085 ], 00:15:11.085 "driver_specific": { 00:15:11.085 "raid": { 00:15:11.085 "uuid": "b9ed2ef2-ade1-4af1-aa54-4adec0002a6d", 00:15:11.085 "strip_size_kb": 0, 00:15:11.085 "state": "online", 00:15:11.085 "raid_level": "raid1", 00:15:11.085 "superblock": true, 00:15:11.085 "num_base_bdevs": 2, 00:15:11.085 "num_base_bdevs_discovered": 2, 00:15:11.085 "num_base_bdevs_operational": 2, 00:15:11.085 "base_bdevs_list": [ 00:15:11.085 { 00:15:11.085 "name": "pt1", 00:15:11.085 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:11.085 "is_configured": true, 00:15:11.085 "data_offset": 256, 00:15:11.085 "data_size": 7936 00:15:11.085 }, 00:15:11.085 { 00:15:11.085 "name": "pt2", 00:15:11.085 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:11.085 "is_configured": true, 00:15:11.085 "data_offset": 256, 00:15:11.085 "data_size": 7936 00:15:11.085 } 00:15:11.085 ] 00:15:11.085 } 00:15:11.085 } 00:15:11.085 }' 00:15:11.085 05:04:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:11.085 05:04:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:11.085 pt2' 00:15:11.085 05:04:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:11.085 05:04:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:15:11.085 05:04:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:11.085 05:04:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:11.085 05:04:21 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:11.085 05:04:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.085 05:04:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:11.085 05:04:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.085 05:04:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:15:11.085 05:04:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:15:11.085 05:04:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:11.345 05:04:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:11.345 05:04:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.345 05:04:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:11.345 05:04:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:11.345 05:04:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.345 05:04:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:15:11.345 05:04:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:15:11.345 05:04:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:11.345 05:04:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.345 05:04:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:11.345 05:04:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:15:11.345 [2024-12-14 
05:04:22.020096] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:11.345 05:04:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.345 05:04:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # '[' b9ed2ef2-ade1-4af1-aa54-4adec0002a6d '!=' b9ed2ef2-ade1-4af1-aa54-4adec0002a6d ']' 00:15:11.345 05:04:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:15:11.345 05:04:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:11.345 05:04:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:15:11.345 05:04:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:15:11.345 05:04:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.345 05:04:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:11.345 [2024-12-14 05:04:22.067803] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:15:11.345 05:04:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.345 05:04:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:11.345 05:04:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:11.345 05:04:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:11.345 05:04:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:11.345 05:04:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:11.345 05:04:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:11.345 05:04:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- 
# local raid_bdev_info 00:15:11.345 05:04:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:11.345 05:04:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:11.345 05:04:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:11.345 05:04:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:11.345 05:04:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:11.345 05:04:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.345 05:04:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:11.345 05:04:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.345 05:04:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:11.345 "name": "raid_bdev1", 00:15:11.345 "uuid": "b9ed2ef2-ade1-4af1-aa54-4adec0002a6d", 00:15:11.345 "strip_size_kb": 0, 00:15:11.345 "state": "online", 00:15:11.345 "raid_level": "raid1", 00:15:11.345 "superblock": true, 00:15:11.345 "num_base_bdevs": 2, 00:15:11.345 "num_base_bdevs_discovered": 1, 00:15:11.345 "num_base_bdevs_operational": 1, 00:15:11.345 "base_bdevs_list": [ 00:15:11.345 { 00:15:11.345 "name": null, 00:15:11.345 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:11.345 "is_configured": false, 00:15:11.345 "data_offset": 0, 00:15:11.345 "data_size": 7936 00:15:11.345 }, 00:15:11.345 { 00:15:11.345 "name": "pt2", 00:15:11.345 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:11.345 "is_configured": true, 00:15:11.345 "data_offset": 256, 00:15:11.345 "data_size": 7936 00:15:11.345 } 00:15:11.345 ] 00:15:11.345 }' 00:15:11.345 05:04:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:11.345 05:04:22 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:11.915 05:04:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:11.915 05:04:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.915 05:04:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:11.915 [2024-12-14 05:04:22.510973] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:11.915 [2024-12-14 05:04:22.511039] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:11.915 [2024-12-14 05:04:22.511114] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:11.915 [2024-12-14 05:04:22.511174] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:11.915 [2024-12-14 05:04:22.511220] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:15:11.916 05:04:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.916 05:04:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:15:11.916 05:04:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:11.916 05:04:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.916 05:04:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:11.916 05:04:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.916 05:04:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:15:11.916 05:04:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:15:11.916 05:04:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 
-- # (( i = 1 )) 00:15:11.916 05:04:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:11.916 05:04:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:15:11.916 05:04:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.916 05:04:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:11.916 05:04:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.916 05:04:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:15:11.916 05:04:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:11.916 05:04:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:15:11.916 05:04:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:15:11.916 05:04:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@519 -- # i=1 00:15:11.916 05:04:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:11.916 05:04:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.916 05:04:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:11.916 [2024-12-14 05:04:22.566891] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:11.916 [2024-12-14 05:04:22.566934] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:11.916 [2024-12-14 05:04:22.566950] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:15:11.916 [2024-12-14 05:04:22.566958] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:11.916 [2024-12-14 05:04:22.569016] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:11.916 [2024-12-14 05:04:22.569085] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:11.916 [2024-12-14 05:04:22.569152] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:11.916 [2024-12-14 05:04:22.569188] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:11.916 [2024-12-14 05:04:22.569253] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:15:11.916 [2024-12-14 05:04:22.569261] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:15:11.916 [2024-12-14 05:04:22.569450] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:15:11.916 [2024-12-14 05:04:22.569551] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:15:11.916 [2024-12-14 05:04:22.569562] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00 00:15:11.916 [2024-12-14 05:04:22.569644] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:11.916 pt2 00:15:11.916 05:04:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.916 05:04:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:11.916 05:04:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:11.916 05:04:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:11.916 05:04:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:11.916 05:04:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:11.916 05:04:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=1 00:15:11.916 05:04:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:11.916 05:04:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:11.916 05:04:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:11.916 05:04:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:11.916 05:04:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:11.916 05:04:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:11.916 05:04:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.916 05:04:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:11.916 05:04:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.916 05:04:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:11.916 "name": "raid_bdev1", 00:15:11.916 "uuid": "b9ed2ef2-ade1-4af1-aa54-4adec0002a6d", 00:15:11.916 "strip_size_kb": 0, 00:15:11.916 "state": "online", 00:15:11.916 "raid_level": "raid1", 00:15:11.916 "superblock": true, 00:15:11.916 "num_base_bdevs": 2, 00:15:11.916 "num_base_bdevs_discovered": 1, 00:15:11.916 "num_base_bdevs_operational": 1, 00:15:11.916 "base_bdevs_list": [ 00:15:11.916 { 00:15:11.916 "name": null, 00:15:11.916 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:11.916 "is_configured": false, 00:15:11.916 "data_offset": 256, 00:15:11.916 "data_size": 7936 00:15:11.916 }, 00:15:11.916 { 00:15:11.916 "name": "pt2", 00:15:11.916 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:11.916 "is_configured": true, 00:15:11.916 "data_offset": 256, 00:15:11.916 "data_size": 7936 00:15:11.916 } 00:15:11.916 ] 00:15:11.916 }' 
00:15:11.916 05:04:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:11.916 05:04:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:12.176 05:04:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:12.176 05:04:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.176 05:04:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:12.176 [2024-12-14 05:04:23.042074] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:12.176 [2024-12-14 05:04:23.042134] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:12.176 [2024-12-14 05:04:23.042206] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:12.176 [2024-12-14 05:04:23.042255] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:12.176 [2024-12-14 05:04:23.042288] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:15:12.176 05:04:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.176 05:04:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:12.176 05:04:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:15:12.176 05:04:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.176 05:04:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:12.436 05:04:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.436 05:04:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:15:12.436 05:04:23 bdev_raid.raid_superblock_test_4k 
-- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:15:12.436 05:04:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:15:12.436 05:04:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:12.436 05:04:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.436 05:04:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:12.436 [2024-12-14 05:04:23.105932] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:12.436 [2024-12-14 05:04:23.106014] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:12.436 [2024-12-14 05:04:23.106049] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:15:12.436 [2024-12-14 05:04:23.106083] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:12.436 [2024-12-14 05:04:23.108133] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:12.436 [2024-12-14 05:04:23.108214] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:12.436 [2024-12-14 05:04:23.108289] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:12.436 [2024-12-14 05:04:23.108343] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:12.436 [2024-12-14 05:04:23.108453] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:15:12.436 [2024-12-14 05:04:23.108529] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:12.436 [2024-12-14 05:04:23.108574] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state configuring 00:15:12.436 [2024-12-14 05:04:23.108639] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:12.436 [2024-12-14 05:04:23.108744] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:15:12.436 [2024-12-14 05:04:23.108784] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:15:12.436 [2024-12-14 05:04:23.108999] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:15:12.437 [2024-12-14 05:04:23.109136] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:15:12.437 [2024-12-14 05:04:23.109183] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:15:12.437 [2024-12-14 05:04:23.109322] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:12.437 pt1 00:15:12.437 05:04:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.437 05:04:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:15:12.437 05:04:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:12.437 05:04:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:12.437 05:04:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:12.437 05:04:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:12.437 05:04:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:12.437 05:04:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:12.437 05:04:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:12.437 05:04:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:15:12.437 05:04:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:12.437 05:04:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:12.437 05:04:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:12.437 05:04:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:12.437 05:04:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.437 05:04:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:12.437 05:04:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.437 05:04:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:12.437 "name": "raid_bdev1", 00:15:12.437 "uuid": "b9ed2ef2-ade1-4af1-aa54-4adec0002a6d", 00:15:12.437 "strip_size_kb": 0, 00:15:12.437 "state": "online", 00:15:12.437 "raid_level": "raid1", 00:15:12.437 "superblock": true, 00:15:12.437 "num_base_bdevs": 2, 00:15:12.437 "num_base_bdevs_discovered": 1, 00:15:12.437 "num_base_bdevs_operational": 1, 00:15:12.437 "base_bdevs_list": [ 00:15:12.437 { 00:15:12.437 "name": null, 00:15:12.437 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:12.437 "is_configured": false, 00:15:12.437 "data_offset": 256, 00:15:12.437 "data_size": 7936 00:15:12.437 }, 00:15:12.437 { 00:15:12.437 "name": "pt2", 00:15:12.437 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:12.437 "is_configured": true, 00:15:12.437 "data_offset": 256, 00:15:12.437 "data_size": 7936 00:15:12.437 } 00:15:12.437 ] 00:15:12.437 }' 00:15:12.437 05:04:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:12.437 05:04:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:12.697 05:04:23 bdev_raid.raid_superblock_test_4k 
-- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:15:12.697 05:04:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.697 05:04:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:15:12.697 05:04:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:12.697 05:04:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.697 05:04:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:15:12.697 05:04:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:12.697 05:04:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:15:12.697 05:04:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.697 05:04:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:12.697 [2024-12-14 05:04:23.573410] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:12.957 05:04:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.957 05:04:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # '[' b9ed2ef2-ade1-4af1-aa54-4adec0002a6d '!=' b9ed2ef2-ade1-4af1-aa54-4adec0002a6d ']' 00:15:12.957 05:04:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@563 -- # killprocess 96568 00:15:12.957 05:04:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@950 -- # '[' -z 96568 ']' 00:15:12.957 05:04:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@954 -- # kill -0 96568 00:15:12.957 05:04:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@955 -- # uname 00:15:12.957 05:04:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@955 -- # '[' Linux = Linux 
']' 00:15:12.957 05:04:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 96568 00:15:12.957 05:04:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:12.957 05:04:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:12.957 killing process with pid 96568 00:15:12.957 05:04:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@968 -- # echo 'killing process with pid 96568' 00:15:12.957 05:04:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@969 -- # kill 96568 00:15:12.957 [2024-12-14 05:04:23.655183] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:12.957 [2024-12-14 05:04:23.655239] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:12.957 [2024-12-14 05:04:23.655273] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:12.957 [2024-12-14 05:04:23.655282] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:15:12.957 05:04:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@974 -- # wait 96568 00:15:12.957 [2024-12-14 05:04:23.678577] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:13.217 05:04:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@565 -- # return 0 00:15:13.217 00:15:13.217 real 0m4.938s 00:15:13.217 user 0m8.005s 00:15:13.217 sys 0m1.131s 00:15:13.217 05:04:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:13.217 05:04:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:13.217 ************************************ 00:15:13.217 END TEST raid_superblock_test_4k 00:15:13.217 ************************************ 00:15:13.217 05:04:23 bdev_raid -- bdev/bdev_raid.sh@999 -- # '[' true = 
true ']' 00:15:13.217 05:04:23 bdev_raid -- bdev/bdev_raid.sh@1000 -- # run_test raid_rebuild_test_sb_4k raid_rebuild_test raid1 2 true false true 00:15:13.217 05:04:23 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:15:13.217 05:04:23 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:13.217 05:04:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:13.217 ************************************ 00:15:13.217 START TEST raid_rebuild_test_sb_4k 00:15:13.217 ************************************ 00:15:13.217 05:04:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true false true 00:15:13.217 05:04:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:15:13.217 05:04:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:15:13.217 05:04:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:15:13.217 05:04:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:15:13.217 05:04:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:13.217 05:04:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:13.217 05:04:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:13.217 05:04:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:13.217 05:04:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:13.217 05:04:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:13.217 05:04:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:13.217 05:04:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:13.217 05:04:24 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:13.217 05:04:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:15:13.217 05:04:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:13.217 05:04:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:13.217 05:04:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:13.217 05:04:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:13.217 05:04:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:13.217 05:04:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:13.217 05:04:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:15:13.217 05:04:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:15:13.217 05:04:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:15:13.217 05:04:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:15:13.217 05:04:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@597 -- # raid_pid=96885 00:15:13.217 05:04:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:13.217 05:04:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@598 -- # waitforlisten 96885 00:15:13.217 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:15:13.217 05:04:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@831 -- # '[' -z 96885 ']' 00:15:13.217 05:04:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:13.217 05:04:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:13.217 05:04:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:13.217 05:04:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:13.217 05:04:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:13.477 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:13.477 Zero copy mechanism will not be used. 00:15:13.477 [2024-12-14 05:04:24.116028] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:15:13.477 [2024-12-14 05:04:24.116203] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96885 ] 00:15:13.477 [2024-12-14 05:04:24.283276] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:13.477 [2024-12-14 05:04:24.331530] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:13.737 [2024-12-14 05:04:24.374546] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:13.737 [2024-12-14 05:04:24.374585] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:14.307 05:04:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:14.307 05:04:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@864 -- # return 0 00:15:14.307 05:04:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:14.307 05:04:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1_malloc 00:15:14.307 05:04:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.307 05:04:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:14.307 BaseBdev1_malloc 00:15:14.307 05:04:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.307 05:04:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:14.307 05:04:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.307 05:04:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:14.307 [2024-12-14 05:04:24.937267] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:14.307 [2024-12-14 05:04:24.937328] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:14.307 [2024-12-14 05:04:24.937355] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:14.308 [2024-12-14 05:04:24.937369] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:14.308 [2024-12-14 05:04:24.939317] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:14.308 [2024-12-14 05:04:24.939400] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:14.308 BaseBdev1 00:15:14.308 05:04:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.308 05:04:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:14.308 05:04:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2_malloc 00:15:14.308 05:04:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.308 05:04:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:14.308 BaseBdev2_malloc 00:15:14.308 05:04:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.308 05:04:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:14.308 05:04:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.308 05:04:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:14.308 [2024-12-14 05:04:24.983214] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:14.308 [2024-12-14 05:04:24.983356] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:15:14.308 [2024-12-14 05:04:24.983414] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:14.308 [2024-12-14 05:04:24.983442] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:14.308 [2024-12-14 05:04:24.988018] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:14.308 [2024-12-14 05:04:24.988082] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:14.308 BaseBdev2 00:15:14.308 05:04:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.308 05:04:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -b spare_malloc 00:15:14.308 05:04:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.308 05:04:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:14.308 spare_malloc 00:15:14.308 05:04:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.308 05:04:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:14.308 05:04:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.308 05:04:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:14.308 spare_delay 00:15:14.308 05:04:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.308 05:04:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:14.308 05:04:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.308 05:04:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:14.308 
[2024-12-14 05:04:25.026390] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:14.308 [2024-12-14 05:04:25.026439] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:14.308 [2024-12-14 05:04:25.026459] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:15:14.308 [2024-12-14 05:04:25.026467] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:14.308 [2024-12-14 05:04:25.028471] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:14.308 [2024-12-14 05:04:25.028507] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:14.308 spare 00:15:14.308 05:04:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.308 05:04:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:15:14.308 05:04:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.308 05:04:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:14.308 [2024-12-14 05:04:25.038411] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:14.308 [2024-12-14 05:04:25.040167] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:14.308 [2024-12-14 05:04:25.040329] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:15:14.308 [2024-12-14 05:04:25.040348] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:15:14.308 [2024-12-14 05:04:25.040593] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:15:14.308 [2024-12-14 05:04:25.040733] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:15:14.308 [2024-12-14 
05:04:25.040745] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:15:14.308 [2024-12-14 05:04:25.040867] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:14.308 05:04:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.308 05:04:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:14.308 05:04:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:14.308 05:04:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:14.308 05:04:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:14.308 05:04:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:14.308 05:04:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:14.308 05:04:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:14.308 05:04:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:14.308 05:04:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:14.308 05:04:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:14.308 05:04:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:14.308 05:04:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.308 05:04:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:14.308 05:04:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:14.308 05:04:25 bdev_raid.raid_rebuild_test_sb_4k 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.308 05:04:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:14.308 "name": "raid_bdev1", 00:15:14.308 "uuid": "14cd9617-16f3-4dc0-867c-a460bd65f8fe", 00:15:14.308 "strip_size_kb": 0, 00:15:14.308 "state": "online", 00:15:14.308 "raid_level": "raid1", 00:15:14.308 "superblock": true, 00:15:14.308 "num_base_bdevs": 2, 00:15:14.308 "num_base_bdevs_discovered": 2, 00:15:14.308 "num_base_bdevs_operational": 2, 00:15:14.308 "base_bdevs_list": [ 00:15:14.308 { 00:15:14.308 "name": "BaseBdev1", 00:15:14.308 "uuid": "9fcb7946-98d6-5875-a6c2-35e15f486cc3", 00:15:14.308 "is_configured": true, 00:15:14.308 "data_offset": 256, 00:15:14.308 "data_size": 7936 00:15:14.308 }, 00:15:14.308 { 00:15:14.308 "name": "BaseBdev2", 00:15:14.308 "uuid": "491823c3-f02f-5b32-a1f2-9ba1002d58ad", 00:15:14.308 "is_configured": true, 00:15:14.308 "data_offset": 256, 00:15:14.308 "data_size": 7936 00:15:14.308 } 00:15:14.308 ] 00:15:14.308 }' 00:15:14.308 05:04:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:14.308 05:04:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:14.877 05:04:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:14.877 05:04:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.877 05:04:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:14.877 05:04:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:14.877 [2024-12-14 05:04:25.525788] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:14.877 05:04:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.877 05:04:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # 
raid_bdev_size=7936 00:15:14.877 05:04:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:14.877 05:04:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:14.877 05:04:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.877 05:04:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:14.877 05:04:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.877 05:04:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:15:14.877 05:04:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:15:14.877 05:04:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:15:14.877 05:04:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:15:14.877 05:04:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:15:14.877 05:04:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:14.877 05:04:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:15:14.877 05:04:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:14.877 05:04:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:14.877 05:04:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:14.877 05:04:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:15:14.877 05:04:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:14.877 05:04:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:14.877 
05:04:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:15:15.137 [2024-12-14 05:04:25.793225] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:15:15.137 /dev/nbd0 00:15:15.137 05:04:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:15.137 05:04:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:15.137 05:04:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:15:15.137 05:04:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # local i 00:15:15.137 05:04:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:15.137 05:04:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:15.137 05:04:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:15:15.137 05:04:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # break 00:15:15.137 05:04:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:15.137 05:04:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:15.137 05:04:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:15.137 1+0 records in 00:15:15.137 1+0 records out 00:15:15.137 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000434851 s, 9.4 MB/s 00:15:15.137 05:04:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:15.137 05:04:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # size=4096 00:15:15.137 05:04:25 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:15.137 05:04:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:15.137 05:04:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # return 0 00:15:15.137 05:04:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:15.137 05:04:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:15.137 05:04:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:15:15.137 05:04:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:15:15.137 05:04:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:15:15.707 7936+0 records in 00:15:15.707 7936+0 records out 00:15:15.707 32505856 bytes (33 MB, 31 MiB) copied, 0.610366 s, 53.3 MB/s 00:15:15.707 05:04:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:15.707 05:04:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:15.707 05:04:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:15.707 05:04:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:15.707 05:04:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:15:15.707 05:04:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:15.707 05:04:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:15.967 05:04:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:15.967 
[2024-12-14 05:04:26.690168] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:15.967 05:04:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:15.967 05:04:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:15.967 05:04:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:15.967 05:04:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:15.967 05:04:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:15.967 05:04:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:15:15.967 05:04:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:15:15.967 05:04:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:15.967 05:04:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.967 05:04:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:15.967 [2024-12-14 05:04:26.707603] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:15.967 05:04:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.967 05:04:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:15.967 05:04:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:15.967 05:04:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:15.967 05:04:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:15.967 05:04:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:15.967 05:04:26 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:15.967 05:04:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:15.967 05:04:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:15.967 05:04:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:15.967 05:04:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:15.967 05:04:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:15.967 05:04:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:15.967 05:04:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.967 05:04:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:15.967 05:04:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.967 05:04:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:15.967 "name": "raid_bdev1", 00:15:15.967 "uuid": "14cd9617-16f3-4dc0-867c-a460bd65f8fe", 00:15:15.967 "strip_size_kb": 0, 00:15:15.967 "state": "online", 00:15:15.967 "raid_level": "raid1", 00:15:15.967 "superblock": true, 00:15:15.967 "num_base_bdevs": 2, 00:15:15.967 "num_base_bdevs_discovered": 1, 00:15:15.967 "num_base_bdevs_operational": 1, 00:15:15.967 "base_bdevs_list": [ 00:15:15.967 { 00:15:15.967 "name": null, 00:15:15.967 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:15.967 "is_configured": false, 00:15:15.967 "data_offset": 0, 00:15:15.967 "data_size": 7936 00:15:15.967 }, 00:15:15.967 { 00:15:15.967 "name": "BaseBdev2", 00:15:15.967 "uuid": "491823c3-f02f-5b32-a1f2-9ba1002d58ad", 00:15:15.967 "is_configured": true, 00:15:15.967 "data_offset": 256, 00:15:15.967 
"data_size": 7936 00:15:15.967 } 00:15:15.967 ] 00:15:15.967 }' 00:15:15.967 05:04:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:15.967 05:04:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:16.537 05:04:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:16.537 05:04:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.537 05:04:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:16.537 [2024-12-14 05:04:27.114959] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:16.537 [2024-12-14 05:04:27.119191] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d0c0 00:15:16.537 05:04:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.537 05:04:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:16.537 [2024-12-14 05:04:27.121148] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:17.476 05:04:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:17.476 05:04:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:17.476 05:04:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:17.476 05:04:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:17.476 05:04:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:17.476 05:04:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:17.476 05:04:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:15:17.476 05:04:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.476 05:04:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:17.476 05:04:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.476 05:04:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:17.476 "name": "raid_bdev1", 00:15:17.476 "uuid": "14cd9617-16f3-4dc0-867c-a460bd65f8fe", 00:15:17.476 "strip_size_kb": 0, 00:15:17.476 "state": "online", 00:15:17.476 "raid_level": "raid1", 00:15:17.476 "superblock": true, 00:15:17.476 "num_base_bdevs": 2, 00:15:17.476 "num_base_bdevs_discovered": 2, 00:15:17.476 "num_base_bdevs_operational": 2, 00:15:17.476 "process": { 00:15:17.476 "type": "rebuild", 00:15:17.476 "target": "spare", 00:15:17.476 "progress": { 00:15:17.476 "blocks": 2560, 00:15:17.476 "percent": 32 00:15:17.476 } 00:15:17.476 }, 00:15:17.476 "base_bdevs_list": [ 00:15:17.476 { 00:15:17.476 "name": "spare", 00:15:17.476 "uuid": "4448541d-94cb-59ed-87df-3d4c42987eb0", 00:15:17.476 "is_configured": true, 00:15:17.476 "data_offset": 256, 00:15:17.476 "data_size": 7936 00:15:17.476 }, 00:15:17.476 { 00:15:17.476 "name": "BaseBdev2", 00:15:17.476 "uuid": "491823c3-f02f-5b32-a1f2-9ba1002d58ad", 00:15:17.476 "is_configured": true, 00:15:17.476 "data_offset": 256, 00:15:17.476 "data_size": 7936 00:15:17.476 } 00:15:17.476 ] 00:15:17.476 }' 00:15:17.476 05:04:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:17.476 05:04:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:17.476 05:04:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:17.476 05:04:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 
00:15:17.476 05:04:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:17.476 05:04:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.476 05:04:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:17.476 [2024-12-14 05:04:28.285790] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:17.476 [2024-12-14 05:04:28.325717] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:17.476 [2024-12-14 05:04:28.325812] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:17.476 [2024-12-14 05:04:28.325848] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:17.476 [2024-12-14 05:04:28.325869] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:17.476 05:04:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.476 05:04:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:17.476 05:04:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:17.476 05:04:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:17.476 05:04:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:17.476 05:04:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:17.476 05:04:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:17.476 05:04:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:17.476 05:04:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:15:17.476 05:04:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:17.476 05:04:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:17.476 05:04:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:17.476 05:04:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.476 05:04:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:17.476 05:04:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:17.736 05:04:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.736 05:04:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:17.736 "name": "raid_bdev1", 00:15:17.736 "uuid": "14cd9617-16f3-4dc0-867c-a460bd65f8fe", 00:15:17.736 "strip_size_kb": 0, 00:15:17.736 "state": "online", 00:15:17.736 "raid_level": "raid1", 00:15:17.736 "superblock": true, 00:15:17.736 "num_base_bdevs": 2, 00:15:17.736 "num_base_bdevs_discovered": 1, 00:15:17.736 "num_base_bdevs_operational": 1, 00:15:17.736 "base_bdevs_list": [ 00:15:17.736 { 00:15:17.736 "name": null, 00:15:17.736 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:17.736 "is_configured": false, 00:15:17.736 "data_offset": 0, 00:15:17.736 "data_size": 7936 00:15:17.736 }, 00:15:17.736 { 00:15:17.736 "name": "BaseBdev2", 00:15:17.736 "uuid": "491823c3-f02f-5b32-a1f2-9ba1002d58ad", 00:15:17.736 "is_configured": true, 00:15:17.736 "data_offset": 256, 00:15:17.736 "data_size": 7936 00:15:17.736 } 00:15:17.736 ] 00:15:17.736 }' 00:15:17.736 05:04:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:17.736 05:04:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:17.996 05:04:28 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:17.996 05:04:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:17.996 05:04:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:17.996 05:04:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:17.996 05:04:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:17.996 05:04:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:17.996 05:04:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.996 05:04:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:17.996 05:04:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:17.996 05:04:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.996 05:04:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:17.996 "name": "raid_bdev1", 00:15:17.996 "uuid": "14cd9617-16f3-4dc0-867c-a460bd65f8fe", 00:15:17.996 "strip_size_kb": 0, 00:15:17.996 "state": "online", 00:15:17.996 "raid_level": "raid1", 00:15:17.996 "superblock": true, 00:15:17.996 "num_base_bdevs": 2, 00:15:17.996 "num_base_bdevs_discovered": 1, 00:15:17.996 "num_base_bdevs_operational": 1, 00:15:17.996 "base_bdevs_list": [ 00:15:17.996 { 00:15:17.996 "name": null, 00:15:17.996 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:17.996 "is_configured": false, 00:15:17.996 "data_offset": 0, 00:15:17.996 "data_size": 7936 00:15:17.996 }, 00:15:17.996 { 00:15:17.996 "name": "BaseBdev2", 00:15:17.996 "uuid": "491823c3-f02f-5b32-a1f2-9ba1002d58ad", 00:15:17.996 "is_configured": true, 00:15:17.996 "data_offset": 
256, 00:15:17.996 "data_size": 7936 00:15:17.996 } 00:15:17.996 ] 00:15:17.996 }' 00:15:17.996 05:04:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:18.256 05:04:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:18.256 05:04:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:18.256 05:04:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:18.256 05:04:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:18.256 05:04:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.256 05:04:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:18.256 [2024-12-14 05:04:28.945305] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:18.256 [2024-12-14 05:04:28.949331] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d190 00:15:18.256 05:04:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.256 05:04:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:18.256 [2024-12-14 05:04:28.951199] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:19.194 05:04:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:19.194 05:04:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:19.194 05:04:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:19.195 05:04:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:19.195 05:04:29 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:19.195 05:04:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:19.195 05:04:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.195 05:04:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:19.195 05:04:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:19.195 05:04:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.195 05:04:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:19.195 "name": "raid_bdev1", 00:15:19.195 "uuid": "14cd9617-16f3-4dc0-867c-a460bd65f8fe", 00:15:19.195 "strip_size_kb": 0, 00:15:19.195 "state": "online", 00:15:19.195 "raid_level": "raid1", 00:15:19.195 "superblock": true, 00:15:19.195 "num_base_bdevs": 2, 00:15:19.195 "num_base_bdevs_discovered": 2, 00:15:19.195 "num_base_bdevs_operational": 2, 00:15:19.195 "process": { 00:15:19.195 "type": "rebuild", 00:15:19.195 "target": "spare", 00:15:19.195 "progress": { 00:15:19.195 "blocks": 2560, 00:15:19.195 "percent": 32 00:15:19.195 } 00:15:19.195 }, 00:15:19.195 "base_bdevs_list": [ 00:15:19.195 { 00:15:19.195 "name": "spare", 00:15:19.195 "uuid": "4448541d-94cb-59ed-87df-3d4c42987eb0", 00:15:19.195 "is_configured": true, 00:15:19.195 "data_offset": 256, 00:15:19.195 "data_size": 7936 00:15:19.195 }, 00:15:19.195 { 00:15:19.195 "name": "BaseBdev2", 00:15:19.195 "uuid": "491823c3-f02f-5b32-a1f2-9ba1002d58ad", 00:15:19.195 "is_configured": true, 00:15:19.195 "data_offset": 256, 00:15:19.195 "data_size": 7936 00:15:19.195 } 00:15:19.195 ] 00:15:19.195 }' 00:15:19.195 05:04:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:19.195 05:04:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:15:19.195 05:04:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:19.454 05:04:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:19.454 05:04:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:15:19.454 05:04:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:15:19.454 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:15:19.454 05:04:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:15:19.454 05:04:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:15:19.454 05:04:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:15:19.454 05:04:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # local timeout=561 00:15:19.454 05:04:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:19.454 05:04:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:19.454 05:04:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:19.454 05:04:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:19.454 05:04:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:19.454 05:04:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:19.454 05:04:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:19.454 05:04:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:19.454 05:04:30 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.454 05:04:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:19.454 05:04:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.455 05:04:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:19.455 "name": "raid_bdev1", 00:15:19.455 "uuid": "14cd9617-16f3-4dc0-867c-a460bd65f8fe", 00:15:19.455 "strip_size_kb": 0, 00:15:19.455 "state": "online", 00:15:19.455 "raid_level": "raid1", 00:15:19.455 "superblock": true, 00:15:19.455 "num_base_bdevs": 2, 00:15:19.455 "num_base_bdevs_discovered": 2, 00:15:19.455 "num_base_bdevs_operational": 2, 00:15:19.455 "process": { 00:15:19.455 "type": "rebuild", 00:15:19.455 "target": "spare", 00:15:19.455 "progress": { 00:15:19.455 "blocks": 2816, 00:15:19.455 "percent": 35 00:15:19.455 } 00:15:19.455 }, 00:15:19.455 "base_bdevs_list": [ 00:15:19.455 { 00:15:19.455 "name": "spare", 00:15:19.455 "uuid": "4448541d-94cb-59ed-87df-3d4c42987eb0", 00:15:19.455 "is_configured": true, 00:15:19.455 "data_offset": 256, 00:15:19.455 "data_size": 7936 00:15:19.455 }, 00:15:19.455 { 00:15:19.455 "name": "BaseBdev2", 00:15:19.455 "uuid": "491823c3-f02f-5b32-a1f2-9ba1002d58ad", 00:15:19.455 "is_configured": true, 00:15:19.455 "data_offset": 256, 00:15:19.455 "data_size": 7936 00:15:19.455 } 00:15:19.455 ] 00:15:19.455 }' 00:15:19.455 05:04:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:19.455 05:04:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:19.455 05:04:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:19.455 05:04:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:19.455 05:04:30 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@711 -- # sleep 1 00:15:20.393 05:04:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:20.393 05:04:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:20.393 05:04:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:20.393 05:04:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:20.393 05:04:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:20.393 05:04:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:20.393 05:04:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:20.393 05:04:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.393 05:04:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:20.393 05:04:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:20.393 05:04:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.653 05:04:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:20.653 "name": "raid_bdev1", 00:15:20.653 "uuid": "14cd9617-16f3-4dc0-867c-a460bd65f8fe", 00:15:20.653 "strip_size_kb": 0, 00:15:20.653 "state": "online", 00:15:20.653 "raid_level": "raid1", 00:15:20.653 "superblock": true, 00:15:20.653 "num_base_bdevs": 2, 00:15:20.653 "num_base_bdevs_discovered": 2, 00:15:20.653 "num_base_bdevs_operational": 2, 00:15:20.653 "process": { 00:15:20.653 "type": "rebuild", 00:15:20.653 "target": "spare", 00:15:20.653 "progress": { 00:15:20.653 "blocks": 5632, 00:15:20.653 "percent": 70 00:15:20.653 } 00:15:20.653 }, 00:15:20.653 "base_bdevs_list": [ 00:15:20.653 { 
00:15:20.653 "name": "spare", 00:15:20.653 "uuid": "4448541d-94cb-59ed-87df-3d4c42987eb0", 00:15:20.653 "is_configured": true, 00:15:20.653 "data_offset": 256, 00:15:20.653 "data_size": 7936 00:15:20.653 }, 00:15:20.653 { 00:15:20.653 "name": "BaseBdev2", 00:15:20.653 "uuid": "491823c3-f02f-5b32-a1f2-9ba1002d58ad", 00:15:20.653 "is_configured": true, 00:15:20.653 "data_offset": 256, 00:15:20.653 "data_size": 7936 00:15:20.653 } 00:15:20.653 ] 00:15:20.653 }' 00:15:20.653 05:04:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:20.653 05:04:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:20.653 05:04:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:20.653 05:04:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:20.653 05:04:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:21.222 [2024-12-14 05:04:32.061484] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:21.223 [2024-12-14 05:04:32.061557] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:21.223 [2024-12-14 05:04:32.061651] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:21.793 05:04:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:21.793 05:04:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:21.793 05:04:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:21.793 05:04:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:21.793 05:04:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 
00:15:21.793 05:04:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:21.793 05:04:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:21.793 05:04:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.793 05:04:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:21.793 05:04:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:21.793 05:04:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.793 05:04:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:21.793 "name": "raid_bdev1", 00:15:21.793 "uuid": "14cd9617-16f3-4dc0-867c-a460bd65f8fe", 00:15:21.793 "strip_size_kb": 0, 00:15:21.793 "state": "online", 00:15:21.793 "raid_level": "raid1", 00:15:21.793 "superblock": true, 00:15:21.793 "num_base_bdevs": 2, 00:15:21.793 "num_base_bdevs_discovered": 2, 00:15:21.793 "num_base_bdevs_operational": 2, 00:15:21.793 "base_bdevs_list": [ 00:15:21.793 { 00:15:21.793 "name": "spare", 00:15:21.793 "uuid": "4448541d-94cb-59ed-87df-3d4c42987eb0", 00:15:21.793 "is_configured": true, 00:15:21.793 "data_offset": 256, 00:15:21.793 "data_size": 7936 00:15:21.793 }, 00:15:21.793 { 00:15:21.793 "name": "BaseBdev2", 00:15:21.793 "uuid": "491823c3-f02f-5b32-a1f2-9ba1002d58ad", 00:15:21.793 "is_configured": true, 00:15:21.793 "data_offset": 256, 00:15:21.793 "data_size": 7936 00:15:21.793 } 00:15:21.793 ] 00:15:21.793 }' 00:15:21.793 05:04:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:21.793 05:04:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:21.793 05:04:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 
00:15:21.793 05:04:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:21.793 05:04:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@709 -- # break 00:15:21.793 05:04:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:21.793 05:04:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:21.793 05:04:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:21.793 05:04:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:21.793 05:04:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:21.793 05:04:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:21.793 05:04:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:21.793 05:04:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.793 05:04:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:21.793 05:04:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.793 05:04:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:21.793 "name": "raid_bdev1", 00:15:21.793 "uuid": "14cd9617-16f3-4dc0-867c-a460bd65f8fe", 00:15:21.793 "strip_size_kb": 0, 00:15:21.793 "state": "online", 00:15:21.793 "raid_level": "raid1", 00:15:21.793 "superblock": true, 00:15:21.793 "num_base_bdevs": 2, 00:15:21.793 "num_base_bdevs_discovered": 2, 00:15:21.793 "num_base_bdevs_operational": 2, 00:15:21.793 "base_bdevs_list": [ 00:15:21.793 { 00:15:21.793 "name": "spare", 00:15:21.793 "uuid": "4448541d-94cb-59ed-87df-3d4c42987eb0", 00:15:21.793 "is_configured": true, 00:15:21.793 
"data_offset": 256, 00:15:21.793 "data_size": 7936 00:15:21.793 }, 00:15:21.793 { 00:15:21.793 "name": "BaseBdev2", 00:15:21.793 "uuid": "491823c3-f02f-5b32-a1f2-9ba1002d58ad", 00:15:21.793 "is_configured": true, 00:15:21.793 "data_offset": 256, 00:15:21.793 "data_size": 7936 00:15:21.793 } 00:15:21.793 ] 00:15:21.793 }' 00:15:21.793 05:04:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:21.793 05:04:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:21.793 05:04:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:22.053 05:04:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:22.053 05:04:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:22.053 05:04:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:22.053 05:04:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:22.053 05:04:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:22.053 05:04:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:22.053 05:04:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:22.053 05:04:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:22.053 05:04:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:22.053 05:04:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:22.053 05:04:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:22.053 05:04:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:15:22.053 05:04:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.053 05:04:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:22.053 05:04:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:22.053 05:04:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.053 05:04:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:22.053 "name": "raid_bdev1", 00:15:22.053 "uuid": "14cd9617-16f3-4dc0-867c-a460bd65f8fe", 00:15:22.053 "strip_size_kb": 0, 00:15:22.053 "state": "online", 00:15:22.053 "raid_level": "raid1", 00:15:22.053 "superblock": true, 00:15:22.053 "num_base_bdevs": 2, 00:15:22.053 "num_base_bdevs_discovered": 2, 00:15:22.053 "num_base_bdevs_operational": 2, 00:15:22.053 "base_bdevs_list": [ 00:15:22.053 { 00:15:22.053 "name": "spare", 00:15:22.053 "uuid": "4448541d-94cb-59ed-87df-3d4c42987eb0", 00:15:22.053 "is_configured": true, 00:15:22.053 "data_offset": 256, 00:15:22.053 "data_size": 7936 00:15:22.053 }, 00:15:22.053 { 00:15:22.053 "name": "BaseBdev2", 00:15:22.053 "uuid": "491823c3-f02f-5b32-a1f2-9ba1002d58ad", 00:15:22.053 "is_configured": true, 00:15:22.053 "data_offset": 256, 00:15:22.053 "data_size": 7936 00:15:22.053 } 00:15:22.053 ] 00:15:22.053 }' 00:15:22.053 05:04:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:22.053 05:04:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:22.313 05:04:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:22.313 05:04:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.313 05:04:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:22.313 
[2024-12-14 05:04:33.096078] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:22.313 [2024-12-14 05:04:33.096148] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:22.313 [2024-12-14 05:04:33.096277] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:22.313 [2024-12-14 05:04:33.096376] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:22.313 [2024-12-14 05:04:33.096439] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:15:22.313 05:04:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.313 05:04:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:22.313 05:04:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.313 05:04:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # jq length 00:15:22.313 05:04:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:22.313 05:04:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.313 05:04:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:22.313 05:04:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:22.313 05:04:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:15:22.313 05:04:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:15:22.313 05:04:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:22.313 05:04:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # 
bdev_list=('BaseBdev1' 'spare') 00:15:22.313 05:04:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:22.313 05:04:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:22.313 05:04:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:22.313 05:04:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:15:22.313 05:04:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:22.313 05:04:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:22.313 05:04:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:15:22.573 /dev/nbd0 00:15:22.573 05:04:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:22.573 05:04:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:22.573 05:04:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:15:22.573 05:04:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # local i 00:15:22.573 05:04:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:22.573 05:04:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:22.573 05:04:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:15:22.573 05:04:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # break 00:15:22.573 05:04:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:22.573 05:04:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:22.573 05:04:33 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:22.573 1+0 records in 00:15:22.573 1+0 records out 00:15:22.573 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000451524 s, 9.1 MB/s 00:15:22.573 05:04:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:22.573 05:04:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # size=4096 00:15:22.573 05:04:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:22.573 05:04:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:22.573 05:04:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # return 0 00:15:22.573 05:04:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:22.573 05:04:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:22.573 05:04:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:15:22.833 /dev/nbd1 00:15:22.833 05:04:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:22.833 05:04:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:22.833 05:04:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:15:22.833 05:04:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # local i 00:15:22.833 05:04:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:22.833 05:04:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:22.833 05:04:33 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:15:22.833 05:04:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # break 00:15:22.833 05:04:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:22.833 05:04:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:22.833 05:04:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:22.833 1+0 records in 00:15:22.833 1+0 records out 00:15:22.833 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000460703 s, 8.9 MB/s 00:15:22.833 05:04:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:22.833 05:04:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # size=4096 00:15:22.833 05:04:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:22.833 05:04:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:22.833 05:04:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # return 0 00:15:22.833 05:04:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:22.833 05:04:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:22.833 05:04:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:15:23.093 05:04:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:15:23.093 05:04:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:23.093 05:04:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:23.093 05:04:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:23.093 05:04:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:15:23.093 05:04:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:23.093 05:04:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:23.093 05:04:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:23.093 05:04:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:23.093 05:04:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:23.093 05:04:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:23.093 05:04:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:23.093 05:04:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:23.093 05:04:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:15:23.093 05:04:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:15:23.093 05:04:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:23.093 05:04:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:23.353 05:04:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:23.353 05:04:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:23.353 05:04:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:23.353 05:04:34 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:23.353 05:04:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:23.353 05:04:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:23.353 05:04:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:15:23.353 05:04:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:15:23.353 05:04:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:15:23.353 05:04:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:15:23.353 05:04:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.353 05:04:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:23.353 05:04:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.353 05:04:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:23.353 05:04:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.353 05:04:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:23.353 [2024-12-14 05:04:34.180457] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:23.353 [2024-12-14 05:04:34.180511] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:23.353 [2024-12-14 05:04:34.180530] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:15:23.353 [2024-12-14 05:04:34.180543] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:23.353 [2024-12-14 05:04:34.182676] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:23.353 
[2024-12-14 05:04:34.182753] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:23.353 [2024-12-14 05:04:34.182854] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:23.353 [2024-12-14 05:04:34.182928] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:23.353 [2024-12-14 05:04:34.183086] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:23.353 spare 00:15:23.353 05:04:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.353 05:04:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:15:23.353 05:04:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.353 05:04:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:23.613 [2024-12-14 05:04:34.283032] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006600 00:15:23.613 [2024-12-14 05:04:34.283055] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:15:23.613 [2024-12-14 05:04:34.283329] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c19b0 00:15:23.613 [2024-12-14 05:04:34.283471] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006600 00:15:23.613 [2024-12-14 05:04:34.283483] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006600 00:15:23.613 [2024-12-14 05:04:34.283616] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:23.613 05:04:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.613 05:04:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:23.613 05:04:34 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:23.613 05:04:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:23.613 05:04:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:23.613 05:04:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:23.613 05:04:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:23.613 05:04:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:23.613 05:04:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:23.613 05:04:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:23.613 05:04:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:23.613 05:04:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:23.613 05:04:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:23.613 05:04:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.613 05:04:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:23.613 05:04:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.613 05:04:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:23.613 "name": "raid_bdev1", 00:15:23.613 "uuid": "14cd9617-16f3-4dc0-867c-a460bd65f8fe", 00:15:23.613 "strip_size_kb": 0, 00:15:23.613 "state": "online", 00:15:23.613 "raid_level": "raid1", 00:15:23.613 "superblock": true, 00:15:23.613 "num_base_bdevs": 2, 00:15:23.613 "num_base_bdevs_discovered": 2, 00:15:23.613 "num_base_bdevs_operational": 2, 
00:15:23.613 "base_bdevs_list": [ 00:15:23.613 { 00:15:23.613 "name": "spare", 00:15:23.613 "uuid": "4448541d-94cb-59ed-87df-3d4c42987eb0", 00:15:23.613 "is_configured": true, 00:15:23.613 "data_offset": 256, 00:15:23.613 "data_size": 7936 00:15:23.613 }, 00:15:23.613 { 00:15:23.614 "name": "BaseBdev2", 00:15:23.614 "uuid": "491823c3-f02f-5b32-a1f2-9ba1002d58ad", 00:15:23.614 "is_configured": true, 00:15:23.614 "data_offset": 256, 00:15:23.614 "data_size": 7936 00:15:23.614 } 00:15:23.614 ] 00:15:23.614 }' 00:15:23.614 05:04:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:23.614 05:04:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:23.873 05:04:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:23.873 05:04:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:23.873 05:04:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:23.873 05:04:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:23.873 05:04:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:23.873 05:04:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:23.873 05:04:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:23.873 05:04:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.873 05:04:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:23.873 05:04:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.873 05:04:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:23.873 "name": "raid_bdev1", 00:15:23.873 
"uuid": "14cd9617-16f3-4dc0-867c-a460bd65f8fe", 00:15:23.873 "strip_size_kb": 0, 00:15:23.873 "state": "online", 00:15:23.873 "raid_level": "raid1", 00:15:23.873 "superblock": true, 00:15:23.873 "num_base_bdevs": 2, 00:15:23.873 "num_base_bdevs_discovered": 2, 00:15:23.873 "num_base_bdevs_operational": 2, 00:15:23.873 "base_bdevs_list": [ 00:15:23.873 { 00:15:23.873 "name": "spare", 00:15:23.873 "uuid": "4448541d-94cb-59ed-87df-3d4c42987eb0", 00:15:23.873 "is_configured": true, 00:15:23.873 "data_offset": 256, 00:15:23.873 "data_size": 7936 00:15:23.873 }, 00:15:23.873 { 00:15:23.873 "name": "BaseBdev2", 00:15:23.873 "uuid": "491823c3-f02f-5b32-a1f2-9ba1002d58ad", 00:15:23.873 "is_configured": true, 00:15:23.873 "data_offset": 256, 00:15:23.873 "data_size": 7936 00:15:23.873 } 00:15:23.873 ] 00:15:23.873 }' 00:15:23.873 05:04:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:24.133 05:04:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:24.133 05:04:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:24.133 05:04:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:24.133 05:04:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:24.133 05:04:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:15:24.133 05:04:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.133 05:04:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:24.133 05:04:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.133 05:04:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:15:24.133 05:04:34 bdev_raid.raid_rebuild_test_sb_4k 
-- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:24.133 05:04:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.133 05:04:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:24.133 [2024-12-14 05:04:34.883292] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:24.133 05:04:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.133 05:04:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:24.133 05:04:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:24.133 05:04:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:24.133 05:04:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:24.133 05:04:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:24.133 05:04:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:24.133 05:04:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:24.133 05:04:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:24.133 05:04:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:24.133 05:04:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:24.133 05:04:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:24.133 05:04:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.133 05:04:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:24.133 05:04:34 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:24.133 05:04:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.133 05:04:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:24.133 "name": "raid_bdev1", 00:15:24.133 "uuid": "14cd9617-16f3-4dc0-867c-a460bd65f8fe", 00:15:24.133 "strip_size_kb": 0, 00:15:24.133 "state": "online", 00:15:24.133 "raid_level": "raid1", 00:15:24.133 "superblock": true, 00:15:24.133 "num_base_bdevs": 2, 00:15:24.133 "num_base_bdevs_discovered": 1, 00:15:24.133 "num_base_bdevs_operational": 1, 00:15:24.133 "base_bdevs_list": [ 00:15:24.133 { 00:15:24.133 "name": null, 00:15:24.133 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:24.133 "is_configured": false, 00:15:24.133 "data_offset": 0, 00:15:24.133 "data_size": 7936 00:15:24.133 }, 00:15:24.133 { 00:15:24.133 "name": "BaseBdev2", 00:15:24.133 "uuid": "491823c3-f02f-5b32-a1f2-9ba1002d58ad", 00:15:24.133 "is_configured": true, 00:15:24.133 "data_offset": 256, 00:15:24.133 "data_size": 7936 00:15:24.133 } 00:15:24.133 ] 00:15:24.133 }' 00:15:24.133 05:04:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:24.133 05:04:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:24.705 05:04:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:24.705 05:04:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.705 05:04:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:24.705 [2024-12-14 05:04:35.342496] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:24.705 [2024-12-14 05:04:35.342694] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than 
existing raid bdev raid_bdev1 (5) 00:15:24.705 [2024-12-14 05:04:35.342764] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:15:24.705 [2024-12-14 05:04:35.342842] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:24.705 [2024-12-14 05:04:35.346827] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1a80 00:15:24.705 05:04:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.705 05:04:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@757 -- # sleep 1 00:15:24.705 [2024-12-14 05:04:35.348684] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:25.642 05:04:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:25.642 05:04:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:25.642 05:04:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:25.642 05:04:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:25.642 05:04:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:25.642 05:04:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:25.642 05:04:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:25.642 05:04:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.642 05:04:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:25.642 05:04:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.642 05:04:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:15:25.642 "name": "raid_bdev1", 00:15:25.642 "uuid": "14cd9617-16f3-4dc0-867c-a460bd65f8fe", 00:15:25.642 "strip_size_kb": 0, 00:15:25.642 "state": "online", 00:15:25.642 "raid_level": "raid1", 00:15:25.642 "superblock": true, 00:15:25.642 "num_base_bdevs": 2, 00:15:25.642 "num_base_bdevs_discovered": 2, 00:15:25.642 "num_base_bdevs_operational": 2, 00:15:25.642 "process": { 00:15:25.642 "type": "rebuild", 00:15:25.642 "target": "spare", 00:15:25.642 "progress": { 00:15:25.642 "blocks": 2560, 00:15:25.642 "percent": 32 00:15:25.642 } 00:15:25.642 }, 00:15:25.642 "base_bdevs_list": [ 00:15:25.642 { 00:15:25.642 "name": "spare", 00:15:25.642 "uuid": "4448541d-94cb-59ed-87df-3d4c42987eb0", 00:15:25.642 "is_configured": true, 00:15:25.642 "data_offset": 256, 00:15:25.642 "data_size": 7936 00:15:25.642 }, 00:15:25.642 { 00:15:25.642 "name": "BaseBdev2", 00:15:25.642 "uuid": "491823c3-f02f-5b32-a1f2-9ba1002d58ad", 00:15:25.642 "is_configured": true, 00:15:25.642 "data_offset": 256, 00:15:25.642 "data_size": 7936 00:15:25.642 } 00:15:25.642 ] 00:15:25.642 }' 00:15:25.642 05:04:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:25.642 05:04:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:25.642 05:04:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:25.642 05:04:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:25.642 05:04:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:15:25.642 05:04:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.642 05:04:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:25.642 [2024-12-14 05:04:36.489549] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 
00:15:25.901 [2024-12-14 05:04:36.552647] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:25.901 [2024-12-14 05:04:36.552698] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:25.901 [2024-12-14 05:04:36.552715] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:25.901 [2024-12-14 05:04:36.552721] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:25.901 05:04:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.901 05:04:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:25.901 05:04:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:25.901 05:04:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:25.901 05:04:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:25.901 05:04:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:25.901 05:04:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:25.901 05:04:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:25.901 05:04:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:25.901 05:04:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:25.901 05:04:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:25.901 05:04:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:25.901 05:04:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:15:25.901 05:04:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.901 05:04:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:25.901 05:04:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.901 05:04:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:25.901 "name": "raid_bdev1", 00:15:25.901 "uuid": "14cd9617-16f3-4dc0-867c-a460bd65f8fe", 00:15:25.901 "strip_size_kb": 0, 00:15:25.901 "state": "online", 00:15:25.901 "raid_level": "raid1", 00:15:25.901 "superblock": true, 00:15:25.901 "num_base_bdevs": 2, 00:15:25.901 "num_base_bdevs_discovered": 1, 00:15:25.901 "num_base_bdevs_operational": 1, 00:15:25.901 "base_bdevs_list": [ 00:15:25.901 { 00:15:25.901 "name": null, 00:15:25.901 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:25.901 "is_configured": false, 00:15:25.901 "data_offset": 0, 00:15:25.901 "data_size": 7936 00:15:25.901 }, 00:15:25.901 { 00:15:25.901 "name": "BaseBdev2", 00:15:25.901 "uuid": "491823c3-f02f-5b32-a1f2-9ba1002d58ad", 00:15:25.901 "is_configured": true, 00:15:25.901 "data_offset": 256, 00:15:25.901 "data_size": 7936 00:15:25.901 } 00:15:25.901 ] 00:15:25.901 }' 00:15:25.901 05:04:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:25.901 05:04:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:26.160 05:04:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:26.160 05:04:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.160 05:04:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:26.160 [2024-12-14 05:04:37.039917] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:26.160 [2024-12-14 
05:04:37.039976] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:26.160 [2024-12-14 05:04:37.039999] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:15:26.160 [2024-12-14 05:04:37.040009] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:26.160 [2024-12-14 05:04:37.040444] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:26.161 [2024-12-14 05:04:37.040462] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:26.161 [2024-12-14 05:04:37.040540] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:26.161 [2024-12-14 05:04:37.040552] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:15:26.161 [2024-12-14 05:04:37.040568] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:15:26.161 [2024-12-14 05:04:37.040588] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:26.419 spare 00:15:26.419 [2024-12-14 05:04:37.044482] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:15:26.419 05:04:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.419 05:04:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@764 -- # sleep 1 00:15:26.419 [2024-12-14 05:04:37.046359] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:27.356 05:04:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:27.356 05:04:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:27.356 05:04:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:27.356 05:04:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:27.356 05:04:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:27.356 05:04:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:27.357 05:04:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:27.357 05:04:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.357 05:04:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:27.357 05:04:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.357 05:04:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:27.357 "name": "raid_bdev1", 00:15:27.357 "uuid": "14cd9617-16f3-4dc0-867c-a460bd65f8fe", 00:15:27.357 "strip_size_kb": 0, 00:15:27.357 
"state": "online", 00:15:27.357 "raid_level": "raid1", 00:15:27.357 "superblock": true, 00:15:27.357 "num_base_bdevs": 2, 00:15:27.357 "num_base_bdevs_discovered": 2, 00:15:27.357 "num_base_bdevs_operational": 2, 00:15:27.357 "process": { 00:15:27.357 "type": "rebuild", 00:15:27.357 "target": "spare", 00:15:27.357 "progress": { 00:15:27.357 "blocks": 2560, 00:15:27.357 "percent": 32 00:15:27.357 } 00:15:27.357 }, 00:15:27.357 "base_bdevs_list": [ 00:15:27.357 { 00:15:27.357 "name": "spare", 00:15:27.357 "uuid": "4448541d-94cb-59ed-87df-3d4c42987eb0", 00:15:27.357 "is_configured": true, 00:15:27.357 "data_offset": 256, 00:15:27.357 "data_size": 7936 00:15:27.357 }, 00:15:27.357 { 00:15:27.357 "name": "BaseBdev2", 00:15:27.357 "uuid": "491823c3-f02f-5b32-a1f2-9ba1002d58ad", 00:15:27.357 "is_configured": true, 00:15:27.357 "data_offset": 256, 00:15:27.357 "data_size": 7936 00:15:27.357 } 00:15:27.357 ] 00:15:27.357 }' 00:15:27.357 05:04:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:27.357 05:04:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:27.357 05:04:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:27.357 05:04:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:27.357 05:04:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:15:27.357 05:04:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.357 05:04:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:27.357 [2024-12-14 05:04:38.211379] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:27.617 [2024-12-14 05:04:38.250335] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 
00:15:27.617 [2024-12-14 05:04:38.250393] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:27.617 [2024-12-14 05:04:38.250408] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:27.617 [2024-12-14 05:04:38.250416] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:27.617 05:04:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.617 05:04:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:27.617 05:04:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:27.617 05:04:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:27.617 05:04:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:27.617 05:04:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:27.617 05:04:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:27.617 05:04:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:27.617 05:04:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:27.617 05:04:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:27.617 05:04:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:27.617 05:04:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:27.617 05:04:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.617 05:04:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:27.617 05:04:38 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:27.617 05:04:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.617 05:04:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:27.617 "name": "raid_bdev1", 00:15:27.617 "uuid": "14cd9617-16f3-4dc0-867c-a460bd65f8fe", 00:15:27.617 "strip_size_kb": 0, 00:15:27.617 "state": "online", 00:15:27.617 "raid_level": "raid1", 00:15:27.617 "superblock": true, 00:15:27.617 "num_base_bdevs": 2, 00:15:27.617 "num_base_bdevs_discovered": 1, 00:15:27.617 "num_base_bdevs_operational": 1, 00:15:27.617 "base_bdevs_list": [ 00:15:27.617 { 00:15:27.617 "name": null, 00:15:27.617 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:27.617 "is_configured": false, 00:15:27.617 "data_offset": 0, 00:15:27.617 "data_size": 7936 00:15:27.617 }, 00:15:27.617 { 00:15:27.617 "name": "BaseBdev2", 00:15:27.617 "uuid": "491823c3-f02f-5b32-a1f2-9ba1002d58ad", 00:15:27.617 "is_configured": true, 00:15:27.617 "data_offset": 256, 00:15:27.617 "data_size": 7936 00:15:27.617 } 00:15:27.617 ] 00:15:27.617 }' 00:15:27.617 05:04:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:27.617 05:04:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:27.876 05:04:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:27.876 05:04:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:27.876 05:04:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:27.876 05:04:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:27.877 05:04:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:27.877 05:04:38 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:27.877 05:04:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:27.877 05:04:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.877 05:04:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:28.136 05:04:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.136 05:04:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:28.136 "name": "raid_bdev1", 00:15:28.136 "uuid": "14cd9617-16f3-4dc0-867c-a460bd65f8fe", 00:15:28.136 "strip_size_kb": 0, 00:15:28.136 "state": "online", 00:15:28.136 "raid_level": "raid1", 00:15:28.136 "superblock": true, 00:15:28.136 "num_base_bdevs": 2, 00:15:28.136 "num_base_bdevs_discovered": 1, 00:15:28.136 "num_base_bdevs_operational": 1, 00:15:28.136 "base_bdevs_list": [ 00:15:28.136 { 00:15:28.136 "name": null, 00:15:28.136 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:28.136 "is_configured": false, 00:15:28.136 "data_offset": 0, 00:15:28.136 "data_size": 7936 00:15:28.136 }, 00:15:28.136 { 00:15:28.136 "name": "BaseBdev2", 00:15:28.136 "uuid": "491823c3-f02f-5b32-a1f2-9ba1002d58ad", 00:15:28.136 "is_configured": true, 00:15:28.136 "data_offset": 256, 00:15:28.136 "data_size": 7936 00:15:28.136 } 00:15:28.136 ] 00:15:28.136 }' 00:15:28.136 05:04:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:28.136 05:04:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:28.136 05:04:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:28.136 05:04:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:28.136 05:04:38 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:15:28.136 05:04:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.136 05:04:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:28.136 05:04:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.136 05:04:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:28.136 05:04:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.136 05:04:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:28.136 [2024-12-14 05:04:38.897487] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:28.136 [2024-12-14 05:04:38.897540] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:28.136 [2024-12-14 05:04:38.897559] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:15:28.136 [2024-12-14 05:04:38.897570] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:28.136 [2024-12-14 05:04:38.897937] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:28.136 [2024-12-14 05:04:38.897958] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:28.136 [2024-12-14 05:04:38.898020] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:15:28.136 [2024-12-14 05:04:38.898038] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:15:28.136 [2024-12-14 05:04:38.898045] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:28.136 [2024-12-14 05:04:38.898059] 
bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:15:28.136 BaseBdev1 00:15:28.137 05:04:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.137 05:04:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@775 -- # sleep 1 00:15:29.075 05:04:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:29.075 05:04:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:29.075 05:04:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:29.075 05:04:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:29.075 05:04:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:29.075 05:04:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:29.075 05:04:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:29.075 05:04:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:29.075 05:04:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:29.075 05:04:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:29.075 05:04:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:29.075 05:04:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:29.075 05:04:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.075 05:04:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:29.075 05:04:39 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.334 05:04:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:29.334 "name": "raid_bdev1", 00:15:29.334 "uuid": "14cd9617-16f3-4dc0-867c-a460bd65f8fe", 00:15:29.334 "strip_size_kb": 0, 00:15:29.334 "state": "online", 00:15:29.334 "raid_level": "raid1", 00:15:29.334 "superblock": true, 00:15:29.334 "num_base_bdevs": 2, 00:15:29.334 "num_base_bdevs_discovered": 1, 00:15:29.334 "num_base_bdevs_operational": 1, 00:15:29.334 "base_bdevs_list": [ 00:15:29.334 { 00:15:29.334 "name": null, 00:15:29.334 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:29.334 "is_configured": false, 00:15:29.334 "data_offset": 0, 00:15:29.334 "data_size": 7936 00:15:29.334 }, 00:15:29.334 { 00:15:29.334 "name": "BaseBdev2", 00:15:29.334 "uuid": "491823c3-f02f-5b32-a1f2-9ba1002d58ad", 00:15:29.334 "is_configured": true, 00:15:29.334 "data_offset": 256, 00:15:29.334 "data_size": 7936 00:15:29.334 } 00:15:29.334 ] 00:15:29.334 }' 00:15:29.334 05:04:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:29.334 05:04:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:29.593 05:04:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:29.593 05:04:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:29.593 05:04:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:29.593 05:04:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:29.593 05:04:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:29.593 05:04:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:29.593 05:04:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 
-- # rpc_cmd bdev_raid_get_bdevs all 00:15:29.593 05:04:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.593 05:04:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:29.593 05:04:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.593 05:04:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:29.593 "name": "raid_bdev1", 00:15:29.593 "uuid": "14cd9617-16f3-4dc0-867c-a460bd65f8fe", 00:15:29.593 "strip_size_kb": 0, 00:15:29.593 "state": "online", 00:15:29.593 "raid_level": "raid1", 00:15:29.593 "superblock": true, 00:15:29.593 "num_base_bdevs": 2, 00:15:29.593 "num_base_bdevs_discovered": 1, 00:15:29.593 "num_base_bdevs_operational": 1, 00:15:29.593 "base_bdevs_list": [ 00:15:29.593 { 00:15:29.593 "name": null, 00:15:29.593 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:29.593 "is_configured": false, 00:15:29.593 "data_offset": 0, 00:15:29.593 "data_size": 7936 00:15:29.593 }, 00:15:29.593 { 00:15:29.593 "name": "BaseBdev2", 00:15:29.593 "uuid": "491823c3-f02f-5b32-a1f2-9ba1002d58ad", 00:15:29.593 "is_configured": true, 00:15:29.593 "data_offset": 256, 00:15:29.593 "data_size": 7936 00:15:29.593 } 00:15:29.593 ] 00:15:29.593 }' 00:15:29.593 05:04:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:29.593 05:04:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:29.593 05:04:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:29.852 05:04:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:29.852 05:04:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:29.852 05:04:40 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@650 -- # local es=0 00:15:29.852 05:04:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:29.852 05:04:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:15:29.852 05:04:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:29.852 05:04:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:15:29.852 05:04:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:29.852 05:04:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:29.852 05:04:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.852 05:04:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:29.852 [2024-12-14 05:04:40.502882] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:29.852 [2024-12-14 05:04:40.503041] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:15:29.852 [2024-12-14 05:04:40.503053] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:29.852 request: 00:15:29.852 { 00:15:29.852 "base_bdev": "BaseBdev1", 00:15:29.852 "raid_bdev": "raid_bdev1", 00:15:29.852 "method": "bdev_raid_add_base_bdev", 00:15:29.852 "req_id": 1 00:15:29.852 } 00:15:29.852 Got JSON-RPC error response 00:15:29.852 response: 00:15:29.852 { 00:15:29.852 "code": -22, 00:15:29.852 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:15:29.852 } 00:15:29.852 05:04:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 
00:15:29.852 05:04:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@653 -- # es=1 00:15:29.853 05:04:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:29.853 05:04:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:29.853 05:04:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:29.853 05:04:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@779 -- # sleep 1 00:15:30.790 05:04:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:30.790 05:04:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:30.790 05:04:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:30.790 05:04:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:30.790 05:04:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:30.790 05:04:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:30.790 05:04:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:30.790 05:04:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:30.790 05:04:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:30.790 05:04:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:30.790 05:04:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:30.790 05:04:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:30.790 05:04:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:15:30.790 05:04:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:30.790 05:04:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.790 05:04:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:30.790 "name": "raid_bdev1", 00:15:30.790 "uuid": "14cd9617-16f3-4dc0-867c-a460bd65f8fe", 00:15:30.790 "strip_size_kb": 0, 00:15:30.790 "state": "online", 00:15:30.790 "raid_level": "raid1", 00:15:30.790 "superblock": true, 00:15:30.790 "num_base_bdevs": 2, 00:15:30.790 "num_base_bdevs_discovered": 1, 00:15:30.790 "num_base_bdevs_operational": 1, 00:15:30.790 "base_bdevs_list": [ 00:15:30.790 { 00:15:30.790 "name": null, 00:15:30.790 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:30.790 "is_configured": false, 00:15:30.790 "data_offset": 0, 00:15:30.790 "data_size": 7936 00:15:30.790 }, 00:15:30.790 { 00:15:30.790 "name": "BaseBdev2", 00:15:30.790 "uuid": "491823c3-f02f-5b32-a1f2-9ba1002d58ad", 00:15:30.790 "is_configured": true, 00:15:30.790 "data_offset": 256, 00:15:30.790 "data_size": 7936 00:15:30.790 } 00:15:30.790 ] 00:15:30.790 }' 00:15:30.790 05:04:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:30.790 05:04:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:31.083 05:04:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:31.083 05:04:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:31.083 05:04:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:31.083 05:04:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:31.083 05:04:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:31.343 05:04:41 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:31.343 05:04:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:31.343 05:04:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.343 05:04:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:31.343 05:04:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.343 05:04:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:31.343 "name": "raid_bdev1", 00:15:31.343 "uuid": "14cd9617-16f3-4dc0-867c-a460bd65f8fe", 00:15:31.343 "strip_size_kb": 0, 00:15:31.343 "state": "online", 00:15:31.343 "raid_level": "raid1", 00:15:31.343 "superblock": true, 00:15:31.343 "num_base_bdevs": 2, 00:15:31.343 "num_base_bdevs_discovered": 1, 00:15:31.343 "num_base_bdevs_operational": 1, 00:15:31.343 "base_bdevs_list": [ 00:15:31.343 { 00:15:31.343 "name": null, 00:15:31.343 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:31.343 "is_configured": false, 00:15:31.343 "data_offset": 0, 00:15:31.343 "data_size": 7936 00:15:31.343 }, 00:15:31.343 { 00:15:31.343 "name": "BaseBdev2", 00:15:31.343 "uuid": "491823c3-f02f-5b32-a1f2-9ba1002d58ad", 00:15:31.343 "is_configured": true, 00:15:31.343 "data_offset": 256, 00:15:31.343 "data_size": 7936 00:15:31.343 } 00:15:31.343 ] 00:15:31.343 }' 00:15:31.343 05:04:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:31.343 05:04:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:31.343 05:04:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:31.343 05:04:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:31.343 05:04:42 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@784 -- # killprocess 96885 00:15:31.343 05:04:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@950 -- # '[' -z 96885 ']' 00:15:31.343 05:04:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@954 -- # kill -0 96885 00:15:31.343 05:04:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@955 -- # uname 00:15:31.343 05:04:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:31.343 05:04:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 96885 00:15:31.343 05:04:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:31.343 killing process with pid 96885 00:15:31.343 Received shutdown signal, test time was about 60.000000 seconds 00:15:31.343 00:15:31.343 Latency(us) 00:15:31.343 [2024-12-14T05:04:42.226Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:31.343 [2024-12-14T05:04:42.226Z] =================================================================================================================== 00:15:31.343 [2024-12-14T05:04:42.226Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:31.343 05:04:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:31.343 05:04:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@968 -- # echo 'killing process with pid 96885' 00:15:31.343 05:04:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@969 -- # kill 96885 00:15:31.343 [2024-12-14 05:04:42.144570] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:31.343 [2024-12-14 05:04:42.144710] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:31.343 05:04:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@974 -- # wait 96885 00:15:31.343 [2024-12-14 
05:04:42.144761] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:31.343 [2024-12-14 05:04:42.144770] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state offline 00:15:31.343 [2024-12-14 05:04:42.176319] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:31.603 05:04:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@786 -- # return 0 00:15:31.603 00:15:31.603 real 0m18.400s 00:15:31.603 user 0m24.369s 00:15:31.603 sys 0m2.752s 00:15:31.603 05:04:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:31.603 ************************************ 00:15:31.603 END TEST raid_rebuild_test_sb_4k 00:15:31.603 ************************************ 00:15:31.603 05:04:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:31.603 05:04:42 bdev_raid -- bdev/bdev_raid.sh@1003 -- # base_malloc_params='-m 32' 00:15:31.603 05:04:42 bdev_raid -- bdev/bdev_raid.sh@1004 -- # run_test raid_state_function_test_sb_md_separate raid_state_function_test raid1 2 true 00:15:31.603 05:04:42 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:15:31.603 05:04:42 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:31.603 05:04:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:31.863 ************************************ 00:15:31.863 START TEST raid_state_function_test_sb_md_separate 00:15:31.863 ************************************ 00:15:31.863 05:04:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 true 00:15:31.863 05:04:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:15:31.863 05:04:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:15:31.863 
05:04:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:15:31.863 05:04:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:15:31.863 05:04:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:15:31.863 05:04:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:31.863 05:04:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:15:31.863 05:04:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:31.863 05:04:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:31.863 05:04:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:15:31.863 05:04:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:31.863 05:04:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:31.863 05:04:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:15:31.863 05:04:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:15:31.863 05:04:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:15:31.863 05:04:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # local strip_size 00:15:31.863 05:04:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:15:31.863 05:04:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:15:31.863 05:04:42 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:15:31.863 05:04:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:15:31.864 05:04:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:15:31.864 05:04:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:15:31.864 05:04:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@229 -- # raid_pid=97559 00:15:31.864 05:04:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:15:31.864 05:04:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 97559' 00:15:31.864 Process raid pid: 97559 00:15:31.864 05:04:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@231 -- # waitforlisten 97559 00:15:31.864 05:04:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@831 -- # '[' -z 97559 ']' 00:15:31.864 05:04:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:31.864 05:04:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:31.864 05:04:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:31.864 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:15:31.864 05:04:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:31.864 05:04:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:31.864 [2024-12-14 05:04:42.590541] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:15:31.864 [2024-12-14 05:04:42.590758] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:32.124 [2024-12-14 05:04:42.757474] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:32.124 [2024-12-14 05:04:42.804082] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:32.124 [2024-12-14 05:04:42.846899] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:32.124 [2024-12-14 05:04:42.846934] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:32.693 05:04:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:32.693 05:04:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@864 -- # return 0 00:15:32.693 05:04:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:15:32.693 05:04:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.693 05:04:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:32.693 [2024-12-14 05:04:43.404580] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:32.693 [2024-12-14 05:04:43.404640] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base 
bdev BaseBdev1 doesn't exist now 00:15:32.693 [2024-12-14 05:04:43.404652] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:32.693 [2024-12-14 05:04:43.404662] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:32.693 05:04:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.693 05:04:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:32.693 05:04:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:32.693 05:04:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:32.693 05:04:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:32.693 05:04:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:32.693 05:04:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:32.693 05:04:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:32.693 05:04:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:32.693 05:04:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:32.693 05:04:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:32.693 05:04:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:32.693 05:04:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:15:32.693 05:04:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.693 05:04:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:32.693 05:04:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.693 05:04:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:32.693 "name": "Existed_Raid", 00:15:32.693 "uuid": "0a08d530-3990-46a7-9b31-8e8d9adc40af", 00:15:32.693 "strip_size_kb": 0, 00:15:32.693 "state": "configuring", 00:15:32.693 "raid_level": "raid1", 00:15:32.693 "superblock": true, 00:15:32.693 "num_base_bdevs": 2, 00:15:32.693 "num_base_bdevs_discovered": 0, 00:15:32.693 "num_base_bdevs_operational": 2, 00:15:32.693 "base_bdevs_list": [ 00:15:32.693 { 00:15:32.693 "name": "BaseBdev1", 00:15:32.693 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:32.693 "is_configured": false, 00:15:32.693 "data_offset": 0, 00:15:32.693 "data_size": 0 00:15:32.693 }, 00:15:32.693 { 00:15:32.693 "name": "BaseBdev2", 00:15:32.693 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:32.693 "is_configured": false, 00:15:32.693 "data_offset": 0, 00:15:32.693 "data_size": 0 00:15:32.693 } 00:15:32.693 ] 00:15:32.693 }' 00:15:32.693 05:04:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:32.693 05:04:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:32.953 05:04:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:32.953 05:04:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.953 05:04:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:32.953 
[2024-12-14 05:04:43.807853] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:32.953 [2024-12-14 05:04:43.807901] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:15:32.953 05:04:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.953 05:04:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:15:32.953 05:04:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.953 05:04:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:32.953 [2024-12-14 05:04:43.815878] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:32.953 [2024-12-14 05:04:43.815954] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:32.953 [2024-12-14 05:04:43.815981] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:32.953 [2024-12-14 05:04:43.816005] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:32.953 05:04:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.953 05:04:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1 00:15:32.953 05:04:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.953 05:04:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:32.953 [2024-12-14 05:04:43.833549] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:32.953 
BaseBdev1 00:15:33.213 05:04:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.213 05:04:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:15:33.213 05:04:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:15:33.213 05:04:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:33.213 05:04:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@901 -- # local i 00:15:33.213 05:04:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:33.213 05:04:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:33.213 05:04:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:33.213 05:04:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.213 05:04:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:33.213 05:04:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.213 05:04:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:33.213 05:04:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.213 05:04:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:33.213 [ 00:15:33.213 { 00:15:33.213 "name": "BaseBdev1", 00:15:33.213 "aliases": [ 00:15:33.213 "6ddd4de6-0269-452b-991b-a7a7d4fd714a" 00:15:33.213 ], 00:15:33.213 "product_name": "Malloc disk", 
00:15:33.213 "block_size": 4096, 00:15:33.213 "num_blocks": 8192, 00:15:33.213 "uuid": "6ddd4de6-0269-452b-991b-a7a7d4fd714a", 00:15:33.213 "md_size": 32, 00:15:33.213 "md_interleave": false, 00:15:33.213 "dif_type": 0, 00:15:33.213 "assigned_rate_limits": { 00:15:33.213 "rw_ios_per_sec": 0, 00:15:33.213 "rw_mbytes_per_sec": 0, 00:15:33.213 "r_mbytes_per_sec": 0, 00:15:33.213 "w_mbytes_per_sec": 0 00:15:33.213 }, 00:15:33.214 "claimed": true, 00:15:33.214 "claim_type": "exclusive_write", 00:15:33.214 "zoned": false, 00:15:33.214 "supported_io_types": { 00:15:33.214 "read": true, 00:15:33.214 "write": true, 00:15:33.214 "unmap": true, 00:15:33.214 "flush": true, 00:15:33.214 "reset": true, 00:15:33.214 "nvme_admin": false, 00:15:33.214 "nvme_io": false, 00:15:33.214 "nvme_io_md": false, 00:15:33.214 "write_zeroes": true, 00:15:33.214 "zcopy": true, 00:15:33.214 "get_zone_info": false, 00:15:33.214 "zone_management": false, 00:15:33.214 "zone_append": false, 00:15:33.214 "compare": false, 00:15:33.214 "compare_and_write": false, 00:15:33.214 "abort": true, 00:15:33.214 "seek_hole": false, 00:15:33.214 "seek_data": false, 00:15:33.214 "copy": true, 00:15:33.214 "nvme_iov_md": false 00:15:33.214 }, 00:15:33.214 "memory_domains": [ 00:15:33.214 { 00:15:33.214 "dma_device_id": "system", 00:15:33.214 "dma_device_type": 1 00:15:33.214 }, 00:15:33.214 { 00:15:33.214 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:33.214 "dma_device_type": 2 00:15:33.214 } 00:15:33.214 ], 00:15:33.214 "driver_specific": {} 00:15:33.214 } 00:15:33.214 ] 00:15:33.214 05:04:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.214 05:04:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@907 -- # return 0 00:15:33.214 05:04:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:33.214 05:04:43 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:33.214 05:04:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:33.214 05:04:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:33.214 05:04:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:33.214 05:04:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:33.214 05:04:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:33.214 05:04:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:33.214 05:04:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:33.214 05:04:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:33.214 05:04:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:33.214 05:04:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:33.214 05:04:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.214 05:04:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:33.214 05:04:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.214 05:04:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:33.214 "name": "Existed_Raid", 00:15:33.214 "uuid": "44cc87a9-76d8-4ae5-9326-c77fb7922308", 
00:15:33.214 "strip_size_kb": 0, 00:15:33.214 "state": "configuring", 00:15:33.214 "raid_level": "raid1", 00:15:33.214 "superblock": true, 00:15:33.214 "num_base_bdevs": 2, 00:15:33.214 "num_base_bdevs_discovered": 1, 00:15:33.214 "num_base_bdevs_operational": 2, 00:15:33.214 "base_bdevs_list": [ 00:15:33.214 { 00:15:33.214 "name": "BaseBdev1", 00:15:33.214 "uuid": "6ddd4de6-0269-452b-991b-a7a7d4fd714a", 00:15:33.214 "is_configured": true, 00:15:33.214 "data_offset": 256, 00:15:33.214 "data_size": 7936 00:15:33.214 }, 00:15:33.214 { 00:15:33.214 "name": "BaseBdev2", 00:15:33.214 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:33.214 "is_configured": false, 00:15:33.214 "data_offset": 0, 00:15:33.214 "data_size": 0 00:15:33.214 } 00:15:33.214 ] 00:15:33.214 }' 00:15:33.214 05:04:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:33.214 05:04:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:33.474 05:04:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:33.474 05:04:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.474 05:04:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:33.474 [2024-12-14 05:04:44.316772] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:33.474 [2024-12-14 05:04:44.316848] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:15:33.474 05:04:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.474 05:04:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:15:33.474 05:04:44 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.474 05:04:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:33.474 [2024-12-14 05:04:44.328808] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:33.474 [2024-12-14 05:04:44.330538] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:33.474 [2024-12-14 05:04:44.330582] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:33.474 05:04:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.474 05:04:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:15:33.474 05:04:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:33.474 05:04:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:33.474 05:04:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:33.474 05:04:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:33.474 05:04:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:33.474 05:04:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:33.474 05:04:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:33.474 05:04:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:33.474 05:04:44 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:33.474 05:04:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:33.474 05:04:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:33.474 05:04:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:33.474 05:04:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:33.474 05:04:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.474 05:04:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:33.733 05:04:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.733 05:04:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:33.733 "name": "Existed_Raid", 00:15:33.733 "uuid": "83803c79-7fbb-47ad-b7ce-f5120442122d", 00:15:33.733 "strip_size_kb": 0, 00:15:33.733 "state": "configuring", 00:15:33.733 "raid_level": "raid1", 00:15:33.733 "superblock": true, 00:15:33.733 "num_base_bdevs": 2, 00:15:33.733 "num_base_bdevs_discovered": 1, 00:15:33.733 "num_base_bdevs_operational": 2, 00:15:33.733 "base_bdevs_list": [ 00:15:33.733 { 00:15:33.733 "name": "BaseBdev1", 00:15:33.733 "uuid": "6ddd4de6-0269-452b-991b-a7a7d4fd714a", 00:15:33.733 "is_configured": true, 00:15:33.733 "data_offset": 256, 00:15:33.733 "data_size": 7936 00:15:33.733 }, 00:15:33.733 { 00:15:33.733 "name": "BaseBdev2", 00:15:33.733 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:33.733 "is_configured": false, 00:15:33.733 "data_offset": 0, 00:15:33.733 "data_size": 0 00:15:33.733 } 00:15:33.733 ] 00:15:33.733 }' 00:15:33.733 05:04:44 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:33.733 05:04:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:33.993 05:04:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2 00:15:33.993 05:04:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.993 05:04:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:33.993 [2024-12-14 05:04:44.800236] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:33.993 [2024-12-14 05:04:44.800929] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:15:33.993 [2024-12-14 05:04:44.801122] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:15:33.993 BaseBdev2 00:15:33.993 [2024-12-14 05:04:44.801630] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:15:33.993 [2024-12-14 05:04:44.801940] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:15:33.993 [2024-12-14 05:04:44.802113] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:15:33.993 05:04:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.993 [2024-12-14 05:04:44.802592] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:33.993 05:04:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:15:33.993 05:04:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:15:33.993 05:04:44 bdev_raid.raid_state_function_test_sb_md_separate -- 
common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:33.994 05:04:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@901 -- # local i 00:15:33.994 05:04:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:33.994 05:04:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:33.994 05:04:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:33.994 05:04:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.994 05:04:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:33.994 05:04:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.994 05:04:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:33.994 05:04:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.994 05:04:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:33.994 [ 00:15:33.994 { 00:15:33.994 "name": "BaseBdev2", 00:15:33.994 "aliases": [ 00:15:33.994 "f0c50093-f9d5-4978-9776-f52424f458ca" 00:15:33.994 ], 00:15:33.994 "product_name": "Malloc disk", 00:15:33.994 "block_size": 4096, 00:15:33.994 "num_blocks": 8192, 00:15:33.994 "uuid": "f0c50093-f9d5-4978-9776-f52424f458ca", 00:15:33.994 "md_size": 32, 00:15:33.994 "md_interleave": false, 00:15:33.994 "dif_type": 0, 00:15:33.994 "assigned_rate_limits": { 00:15:33.994 "rw_ios_per_sec": 0, 00:15:33.994 "rw_mbytes_per_sec": 0, 00:15:33.994 "r_mbytes_per_sec": 0, 00:15:33.994 "w_mbytes_per_sec": 0 00:15:33.994 }, 00:15:33.994 "claimed": true, 00:15:33.994 "claim_type": 
"exclusive_write", 00:15:33.994 "zoned": false, 00:15:33.994 "supported_io_types": { 00:15:33.994 "read": true, 00:15:33.994 "write": true, 00:15:33.994 "unmap": true, 00:15:33.994 "flush": true, 00:15:33.994 "reset": true, 00:15:33.994 "nvme_admin": false, 00:15:33.994 "nvme_io": false, 00:15:33.994 "nvme_io_md": false, 00:15:33.994 "write_zeroes": true, 00:15:33.994 "zcopy": true, 00:15:33.994 "get_zone_info": false, 00:15:33.994 "zone_management": false, 00:15:33.994 "zone_append": false, 00:15:33.994 "compare": false, 00:15:33.994 "compare_and_write": false, 00:15:33.994 "abort": true, 00:15:33.994 "seek_hole": false, 00:15:33.994 "seek_data": false, 00:15:33.994 "copy": true, 00:15:33.994 "nvme_iov_md": false 00:15:33.994 }, 00:15:33.994 "memory_domains": [ 00:15:33.994 { 00:15:33.994 "dma_device_id": "system", 00:15:33.994 "dma_device_type": 1 00:15:33.994 }, 00:15:33.994 { 00:15:33.994 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:33.994 "dma_device_type": 2 00:15:33.994 } 00:15:33.994 ], 00:15:33.994 "driver_specific": {} 00:15:33.994 } 00:15:33.994 ] 00:15:33.994 05:04:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.994 05:04:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@907 -- # return 0 00:15:33.994 05:04:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:33.994 05:04:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:33.994 05:04:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:15:33.994 05:04:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:33.994 05:04:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:33.994 
05:04:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:33.994 05:04:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:33.994 05:04:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:33.994 05:04:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:33.994 05:04:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:33.994 05:04:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:33.994 05:04:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:33.994 05:04:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:33.994 05:04:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:33.994 05:04:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.994 05:04:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:33.994 05:04:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.254 05:04:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:34.254 "name": "Existed_Raid", 00:15:34.254 "uuid": "83803c79-7fbb-47ad-b7ce-f5120442122d", 00:15:34.254 "strip_size_kb": 0, 00:15:34.254 "state": "online", 00:15:34.254 "raid_level": "raid1", 00:15:34.254 "superblock": true, 00:15:34.254 "num_base_bdevs": 2, 00:15:34.254 "num_base_bdevs_discovered": 2, 00:15:34.254 "num_base_bdevs_operational": 2, 00:15:34.254 
"base_bdevs_list": [ 00:15:34.254 { 00:15:34.254 "name": "BaseBdev1", 00:15:34.254 "uuid": "6ddd4de6-0269-452b-991b-a7a7d4fd714a", 00:15:34.254 "is_configured": true, 00:15:34.254 "data_offset": 256, 00:15:34.254 "data_size": 7936 00:15:34.254 }, 00:15:34.254 { 00:15:34.254 "name": "BaseBdev2", 00:15:34.254 "uuid": "f0c50093-f9d5-4978-9776-f52424f458ca", 00:15:34.254 "is_configured": true, 00:15:34.254 "data_offset": 256, 00:15:34.254 "data_size": 7936 00:15:34.254 } 00:15:34.254 ] 00:15:34.254 }' 00:15:34.254 05:04:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:34.254 05:04:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:34.517 05:04:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:15:34.517 05:04:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:34.517 05:04:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:34.517 05:04:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:34.517 05:04:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:15:34.517 05:04:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:34.517 05:04:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:34.517 05:04:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:34.517 05:04:45 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.517 05:04:45 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # 
set +x 00:15:34.517 [2024-12-14 05:04:45.287691] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:34.517 05:04:45 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.517 05:04:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:34.517 "name": "Existed_Raid", 00:15:34.517 "aliases": [ 00:15:34.517 "83803c79-7fbb-47ad-b7ce-f5120442122d" 00:15:34.517 ], 00:15:34.517 "product_name": "Raid Volume", 00:15:34.517 "block_size": 4096, 00:15:34.517 "num_blocks": 7936, 00:15:34.517 "uuid": "83803c79-7fbb-47ad-b7ce-f5120442122d", 00:15:34.517 "md_size": 32, 00:15:34.517 "md_interleave": false, 00:15:34.517 "dif_type": 0, 00:15:34.517 "assigned_rate_limits": { 00:15:34.517 "rw_ios_per_sec": 0, 00:15:34.517 "rw_mbytes_per_sec": 0, 00:15:34.517 "r_mbytes_per_sec": 0, 00:15:34.517 "w_mbytes_per_sec": 0 00:15:34.518 }, 00:15:34.518 "claimed": false, 00:15:34.518 "zoned": false, 00:15:34.518 "supported_io_types": { 00:15:34.518 "read": true, 00:15:34.518 "write": true, 00:15:34.518 "unmap": false, 00:15:34.518 "flush": false, 00:15:34.518 "reset": true, 00:15:34.518 "nvme_admin": false, 00:15:34.518 "nvme_io": false, 00:15:34.518 "nvme_io_md": false, 00:15:34.518 "write_zeroes": true, 00:15:34.518 "zcopy": false, 00:15:34.518 "get_zone_info": false, 00:15:34.518 "zone_management": false, 00:15:34.518 "zone_append": false, 00:15:34.518 "compare": false, 00:15:34.518 "compare_and_write": false, 00:15:34.518 "abort": false, 00:15:34.518 "seek_hole": false, 00:15:34.518 "seek_data": false, 00:15:34.518 "copy": false, 00:15:34.518 "nvme_iov_md": false 00:15:34.518 }, 00:15:34.518 "memory_domains": [ 00:15:34.518 { 00:15:34.518 "dma_device_id": "system", 00:15:34.518 "dma_device_type": 1 00:15:34.518 }, 00:15:34.518 { 00:15:34.518 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:34.518 "dma_device_type": 2 00:15:34.518 }, 00:15:34.518 { 
00:15:34.518 "dma_device_id": "system", 00:15:34.518 "dma_device_type": 1 00:15:34.518 }, 00:15:34.518 { 00:15:34.518 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:34.518 "dma_device_type": 2 00:15:34.518 } 00:15:34.518 ], 00:15:34.518 "driver_specific": { 00:15:34.518 "raid": { 00:15:34.518 "uuid": "83803c79-7fbb-47ad-b7ce-f5120442122d", 00:15:34.518 "strip_size_kb": 0, 00:15:34.518 "state": "online", 00:15:34.518 "raid_level": "raid1", 00:15:34.518 "superblock": true, 00:15:34.518 "num_base_bdevs": 2, 00:15:34.518 "num_base_bdevs_discovered": 2, 00:15:34.518 "num_base_bdevs_operational": 2, 00:15:34.518 "base_bdevs_list": [ 00:15:34.518 { 00:15:34.518 "name": "BaseBdev1", 00:15:34.518 "uuid": "6ddd4de6-0269-452b-991b-a7a7d4fd714a", 00:15:34.518 "is_configured": true, 00:15:34.518 "data_offset": 256, 00:15:34.518 "data_size": 7936 00:15:34.518 }, 00:15:34.518 { 00:15:34.518 "name": "BaseBdev2", 00:15:34.518 "uuid": "f0c50093-f9d5-4978-9776-f52424f458ca", 00:15:34.518 "is_configured": true, 00:15:34.518 "data_offset": 256, 00:15:34.518 "data_size": 7936 00:15:34.518 } 00:15:34.518 ] 00:15:34.518 } 00:15:34.518 } 00:15:34.518 }' 00:15:34.518 05:04:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:34.518 05:04:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:15:34.518 BaseBdev2' 00:15:34.518 05:04:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:34.779 05:04:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:15:34.779 05:04:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:34.779 05:04:45 bdev_raid.raid_state_function_test_sb_md_separate 
-- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:15:34.779 05:04:45 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.779 05:04:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:34.779 05:04:45 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:34.779 05:04:45 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.779 05:04:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:15:34.779 05:04:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:15:34.779 05:04:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:34.779 05:04:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:34.779 05:04:45 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.779 05:04:45 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:34.779 05:04:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:34.779 05:04:45 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.779 05:04:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:15:34.779 05:04:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ 
\0 ]] 00:15:34.779 05:04:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:34.779 05:04:45 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.779 05:04:45 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:34.779 [2024-12-14 05:04:45.515106] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:34.779 05:04:45 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.779 05:04:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@260 -- # local expected_state 00:15:34.779 05:04:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:15:34.779 05:04:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:34.779 05:04:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:15:34.779 05:04:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:15:34.779 05:04:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:15:34.779 05:04:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:34.779 05:04:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:34.779 05:04:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:34.779 05:04:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:34.779 05:04:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:15:34.779 05:04:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:34.779 05:04:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:34.779 05:04:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:34.779 05:04:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:34.779 05:04:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:34.779 05:04:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:34.779 05:04:45 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.779 05:04:45 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:34.779 05:04:45 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.779 05:04:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:34.779 "name": "Existed_Raid", 00:15:34.779 "uuid": "83803c79-7fbb-47ad-b7ce-f5120442122d", 00:15:34.779 "strip_size_kb": 0, 00:15:34.779 "state": "online", 00:15:34.779 "raid_level": "raid1", 00:15:34.779 "superblock": true, 00:15:34.779 "num_base_bdevs": 2, 00:15:34.779 "num_base_bdevs_discovered": 1, 00:15:34.779 "num_base_bdevs_operational": 1, 00:15:34.779 "base_bdevs_list": [ 00:15:34.779 { 00:15:34.779 "name": null, 00:15:34.779 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:34.779 "is_configured": false, 00:15:34.779 "data_offset": 0, 00:15:34.779 "data_size": 7936 00:15:34.779 }, 00:15:34.779 { 00:15:34.779 "name": "BaseBdev2", 00:15:34.779 "uuid": 
"f0c50093-f9d5-4978-9776-f52424f458ca", 00:15:34.779 "is_configured": true, 00:15:34.779 "data_offset": 256, 00:15:34.779 "data_size": 7936 00:15:34.779 } 00:15:34.779 ] 00:15:34.779 }' 00:15:34.779 05:04:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:34.779 05:04:45 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:35.348 05:04:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:15:35.348 05:04:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:35.348 05:04:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:35.348 05:04:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:35.348 05:04:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.348 05:04:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:35.348 05:04:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.348 05:04:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:35.348 05:04:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:35.348 05:04:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:15:35.348 05:04:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.348 05:04:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:35.348 [2024-12-14 05:04:46.050451] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:35.348 [2024-12-14 05:04:46.050597] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:35.348 [2024-12-14 05:04:46.062934] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:35.348 [2024-12-14 05:04:46.062989] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:35.349 [2024-12-14 05:04:46.063002] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:15:35.349 05:04:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.349 05:04:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:35.349 05:04:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:35.349 05:04:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:35.349 05:04:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.349 05:04:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:35.349 05:04:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:15:35.349 05:04:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.349 05:04:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:15:35.349 05:04:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:15:35.349 05:04:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:15:35.349 05:04:46 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@326 -- # killprocess 97559 00:15:35.349 05:04:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@950 -- # '[' -z 97559 ']' 00:15:35.349 05:04:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@954 -- # kill -0 97559 00:15:35.349 05:04:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@955 -- # uname 00:15:35.349 05:04:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:35.349 05:04:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 97559 00:15:35.349 05:04:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:35.349 05:04:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:35.349 killing process with pid 97559 00:15:35.349 05:04:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@968 -- # echo 'killing process with pid 97559' 00:15:35.349 05:04:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@969 -- # kill 97559 00:15:35.349 [2024-12-14 05:04:46.156706] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:35.349 05:04:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@974 -- # wait 97559 00:15:35.349 [2024-12-14 05:04:46.157677] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:35.609 05:04:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@328 -- # return 0 00:15:35.609 00:15:35.609 real 0m3.921s 00:15:35.609 user 0m6.048s 00:15:35.609 sys 0m0.928s 00:15:35.609 05:04:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:35.609 
05:04:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:35.609 ************************************ 00:15:35.609 END TEST raid_state_function_test_sb_md_separate 00:15:35.609 ************************************ 00:15:35.609 05:04:46 bdev_raid -- bdev/bdev_raid.sh@1005 -- # run_test raid_superblock_test_md_separate raid_superblock_test raid1 2 00:15:35.609 05:04:46 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:15:35.609 05:04:46 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:35.609 05:04:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:35.609 ************************************ 00:15:35.609 START TEST raid_superblock_test_md_separate 00:15:35.609 ************************************ 00:15:35.868 05:04:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 2 00:15:35.868 05:04:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:15:35.868 05:04:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:15:35.868 05:04:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:15:35.868 05:04:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:15:35.868 05:04:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:15:35.868 05:04:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:15:35.868 05:04:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:15:35.868 05:04:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:15:35.868 05:04:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 
00:15:35.868 05:04:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@399 -- # local strip_size 00:15:35.868 05:04:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:15:35.868 05:04:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:15:35.868 05:04:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:15:35.868 05:04:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:15:35.868 05:04:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:15:35.868 05:04:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@412 -- # raid_pid=97800 00:15:35.868 05:04:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:15:35.868 05:04:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@413 -- # waitforlisten 97800 00:15:35.868 05:04:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@831 -- # '[' -z 97800 ']' 00:15:35.868 05:04:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:35.868 05:04:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:35.868 05:04:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:35.868 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:15:35.868 05:04:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:35.868 05:04:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:35.868 [2024-12-14 05:04:46.588544] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:15:35.869 [2024-12-14 05:04:46.588823] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97800 ] 00:15:36.128 [2024-12-14 05:04:46.754914] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:36.128 [2024-12-14 05:04:46.801621] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:36.128 [2024-12-14 05:04:46.844413] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:36.128 [2024-12-14 05:04:46.844525] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:36.703 05:04:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:36.703 05:04:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@864 -- # return 0 00:15:36.703 05:04:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:15:36.703 05:04:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:36.703 05:04:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:15:36.703 05:04:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:15:36.703 05:04:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:15:36.703 05:04:47 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:36.703 05:04:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:36.703 05:04:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:36.703 05:04:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc1 00:15:36.703 05:04:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.703 05:04:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:36.703 malloc1 00:15:36.703 05:04:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.703 05:04:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:36.703 05:04:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.703 05:04:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:36.703 [2024-12-14 05:04:47.447482] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:36.703 [2024-12-14 05:04:47.447579] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:36.703 [2024-12-14 05:04:47.447612] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:36.703 [2024-12-14 05:04:47.447641] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:36.703 [2024-12-14 05:04:47.449596] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:36.703 [2024-12-14 05:04:47.449668] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
pt1 00:15:36.703 pt1 00:15:36.703 05:04:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.703 05:04:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:36.703 05:04:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:36.703 05:04:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:15:36.703 05:04:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:15:36.703 05:04:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:15:36.703 05:04:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:36.703 05:04:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:36.703 05:04:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:36.703 05:04:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc2 00:15:36.703 05:04:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.703 05:04:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:36.703 malloc2 00:15:36.703 05:04:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.703 05:04:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:36.703 05:04:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.703 05:04:47 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:36.703 [2024-12-14 05:04:47.498472] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:36.703 [2024-12-14 05:04:47.498676] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:36.703 [2024-12-14 05:04:47.498757] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:36.703 [2024-12-14 05:04:47.498846] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:36.703 [2024-12-14 05:04:47.503295] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:36.703 [2024-12-14 05:04:47.503456] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:36.703 pt2 00:15:36.703 05:04:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.703 05:04:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:36.703 05:04:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:36.703 05:04:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:15:36.703 05:04:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.703 05:04:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:36.703 [2024-12-14 05:04:47.511773] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:36.703 [2024-12-14 05:04:47.514755] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:36.703 [2024-12-14 05:04:47.515026] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:15:36.703 [2024-12-14 05:04:47.515104] 
bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:15:36.704 [2024-12-14 05:04:47.515288] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:15:36.704 [2024-12-14 05:04:47.515524] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:15:36.704 [2024-12-14 05:04:47.515594] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:15:36.704 [2024-12-14 05:04:47.515821] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:36.704 05:04:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.704 05:04:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:36.704 05:04:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:36.704 05:04:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:36.704 05:04:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:36.704 05:04:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:36.704 05:04:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:36.704 05:04:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:36.704 05:04:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:36.704 05:04:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:36.704 05:04:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:36.704 05:04:47 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:36.704 05:04:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:36.704 05:04:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.704 05:04:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:36.704 05:04:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.704 05:04:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:36.704 "name": "raid_bdev1", 00:15:36.704 "uuid": "0eb9127e-4a26-43a9-9af1-f26a2fc1bd81", 00:15:36.704 "strip_size_kb": 0, 00:15:36.704 "state": "online", 00:15:36.704 "raid_level": "raid1", 00:15:36.704 "superblock": true, 00:15:36.704 "num_base_bdevs": 2, 00:15:36.704 "num_base_bdevs_discovered": 2, 00:15:36.704 "num_base_bdevs_operational": 2, 00:15:36.704 "base_bdevs_list": [ 00:15:36.704 { 00:15:36.704 "name": "pt1", 00:15:36.704 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:36.704 "is_configured": true, 00:15:36.704 "data_offset": 256, 00:15:36.704 "data_size": 7936 00:15:36.704 }, 00:15:36.704 { 00:15:36.704 "name": "pt2", 00:15:36.704 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:36.704 "is_configured": true, 00:15:36.704 "data_offset": 256, 00:15:36.704 "data_size": 7936 00:15:36.704 } 00:15:36.704 ] 00:15:36.704 }' 00:15:36.704 05:04:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:36.704 05:04:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:37.308 05:04:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:15:37.308 05:04:47 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:37.308 05:04:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:37.308 05:04:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:37.308 05:04:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:15:37.308 05:04:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:37.308 05:04:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:37.308 05:04:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:37.308 05:04:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.308 05:04:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:37.308 [2024-12-14 05:04:47.963470] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:37.308 05:04:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.308 05:04:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:37.308 "name": "raid_bdev1", 00:15:37.308 "aliases": [ 00:15:37.308 "0eb9127e-4a26-43a9-9af1-f26a2fc1bd81" 00:15:37.308 ], 00:15:37.308 "product_name": "Raid Volume", 00:15:37.308 "block_size": 4096, 00:15:37.308 "num_blocks": 7936, 00:15:37.308 "uuid": "0eb9127e-4a26-43a9-9af1-f26a2fc1bd81", 00:15:37.308 "md_size": 32, 00:15:37.308 "md_interleave": false, 00:15:37.308 "dif_type": 0, 00:15:37.308 "assigned_rate_limits": { 00:15:37.308 "rw_ios_per_sec": 0, 00:15:37.308 "rw_mbytes_per_sec": 0, 00:15:37.308 "r_mbytes_per_sec": 0, 00:15:37.308 "w_mbytes_per_sec": 0 00:15:37.308 }, 00:15:37.308 "claimed": false, 00:15:37.308 "zoned": false, 
00:15:37.308 "supported_io_types": { 00:15:37.308 "read": true, 00:15:37.308 "write": true, 00:15:37.308 "unmap": false, 00:15:37.308 "flush": false, 00:15:37.308 "reset": true, 00:15:37.308 "nvme_admin": false, 00:15:37.308 "nvme_io": false, 00:15:37.308 "nvme_io_md": false, 00:15:37.308 "write_zeroes": true, 00:15:37.308 "zcopy": false, 00:15:37.308 "get_zone_info": false, 00:15:37.308 "zone_management": false, 00:15:37.308 "zone_append": false, 00:15:37.308 "compare": false, 00:15:37.308 "compare_and_write": false, 00:15:37.308 "abort": false, 00:15:37.308 "seek_hole": false, 00:15:37.308 "seek_data": false, 00:15:37.308 "copy": false, 00:15:37.308 "nvme_iov_md": false 00:15:37.308 }, 00:15:37.308 "memory_domains": [ 00:15:37.308 { 00:15:37.308 "dma_device_id": "system", 00:15:37.308 "dma_device_type": 1 00:15:37.308 }, 00:15:37.308 { 00:15:37.308 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:37.308 "dma_device_type": 2 00:15:37.308 }, 00:15:37.308 { 00:15:37.308 "dma_device_id": "system", 00:15:37.308 "dma_device_type": 1 00:15:37.309 }, 00:15:37.309 { 00:15:37.309 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:37.309 "dma_device_type": 2 00:15:37.309 } 00:15:37.309 ], 00:15:37.309 "driver_specific": { 00:15:37.309 "raid": { 00:15:37.309 "uuid": "0eb9127e-4a26-43a9-9af1-f26a2fc1bd81", 00:15:37.309 "strip_size_kb": 0, 00:15:37.309 "state": "online", 00:15:37.309 "raid_level": "raid1", 00:15:37.309 "superblock": true, 00:15:37.309 "num_base_bdevs": 2, 00:15:37.309 "num_base_bdevs_discovered": 2, 00:15:37.309 "num_base_bdevs_operational": 2, 00:15:37.309 "base_bdevs_list": [ 00:15:37.309 { 00:15:37.309 "name": "pt1", 00:15:37.309 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:37.309 "is_configured": true, 00:15:37.309 "data_offset": 256, 00:15:37.309 "data_size": 7936 00:15:37.309 }, 00:15:37.309 { 00:15:37.309 "name": "pt2", 00:15:37.309 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:37.309 "is_configured": true, 00:15:37.309 "data_offset": 256, 
00:15:37.309 "data_size": 7936 00:15:37.309 } 00:15:37.309 ] 00:15:37.309 } 00:15:37.309 } 00:15:37.309 }' 00:15:37.309 05:04:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:37.309 05:04:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:37.309 pt2' 00:15:37.309 05:04:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:37.309 05:04:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:15:37.309 05:04:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:37.309 05:04:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:37.309 05:04:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.309 05:04:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:37.309 05:04:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:37.309 05:04:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.309 05:04:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:15:37.309 05:04:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:15:37.309 05:04:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:37.309 05:04:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs 
-b pt2 00:15:37.309 05:04:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.309 05:04:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:37.309 05:04:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:37.309 05:04:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.569 05:04:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:15:37.569 05:04:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:15:37.569 05:04:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:37.569 05:04:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.569 05:04:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:37.569 05:04:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:15:37.569 [2024-12-14 05:04:48.210873] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:37.569 05:04:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.569 05:04:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=0eb9127e-4a26-43a9-9af1-f26a2fc1bd81 00:15:37.569 05:04:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@436 -- # '[' -z 0eb9127e-4a26-43a9-9af1-f26a2fc1bd81 ']' 00:15:37.569 05:04:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:37.569 05:04:48 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.569 05:04:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:37.569 [2024-12-14 05:04:48.262591] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:37.569 [2024-12-14 05:04:48.262655] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:37.569 [2024-12-14 05:04:48.262750] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:37.569 [2024-12-14 05:04:48.262828] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:37.569 [2024-12-14 05:04:48.262874] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:15:37.569 05:04:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.569 05:04:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:37.569 05:04:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:15:37.569 05:04:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.569 05:04:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:37.569 05:04:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.569 05:04:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:15:37.569 05:04:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:15:37.569 05:04:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:37.569 05:04:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 
00:15:37.569 05:04:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.569 05:04:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:37.569 05:04:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.569 05:04:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:37.569 05:04:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:15:37.569 05:04:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.569 05:04:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:37.569 05:04:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.569 05:04:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:15:37.569 05:04:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:15:37.569 05:04:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.569 05:04:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:37.569 05:04:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.569 05:04:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:15:37.569 05:04:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:15:37.569 05:04:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@650 -- # local es=0 00:15:37.569 05:04:48 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:15:37.569 05:04:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:15:37.569 05:04:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:37.569 05:04:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:15:37.569 05:04:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:37.569 05:04:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:15:37.569 05:04:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.569 05:04:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:37.569 [2024-12-14 05:04:48.398380] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:15:37.569 [2024-12-14 05:04:48.400251] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:15:37.569 [2024-12-14 05:04:48.400347] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:15:37.569 [2024-12-14 05:04:48.400435] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:15:37.569 [2024-12-14 05:04:48.400492] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:37.569 [2024-12-14 05:04:48.400529] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state configuring 00:15:37.569 request: 00:15:37.569 { 00:15:37.569 "name": 
"raid_bdev1", 00:15:37.569 "raid_level": "raid1", 00:15:37.569 "base_bdevs": [ 00:15:37.569 "malloc1", 00:15:37.569 "malloc2" 00:15:37.569 ], 00:15:37.569 "superblock": false, 00:15:37.569 "method": "bdev_raid_create", 00:15:37.569 "req_id": 1 00:15:37.569 } 00:15:37.569 Got JSON-RPC error response 00:15:37.569 response: 00:15:37.569 { 00:15:37.569 "code": -17, 00:15:37.569 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:15:37.569 } 00:15:37.569 05:04:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:15:37.569 05:04:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@653 -- # es=1 00:15:37.569 05:04:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:37.569 05:04:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:37.569 05:04:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:37.569 05:04:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:37.569 05:04:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.569 05:04:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:37.569 05:04:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:15:37.569 05:04:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.829 05:04:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:15:37.829 05:04:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:15:37.829 05:04:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:15:37.829 05:04:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.829 05:04:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:37.829 [2024-12-14 05:04:48.470217] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:37.829 [2024-12-14 05:04:48.470294] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:37.829 [2024-12-14 05:04:48.470315] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:37.829 [2024-12-14 05:04:48.470323] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:37.829 [2024-12-14 05:04:48.472211] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:37.829 [2024-12-14 05:04:48.472244] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:37.829 [2024-12-14 05:04:48.472285] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:37.829 [2024-12-14 05:04:48.472313] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:37.829 pt1 00:15:37.829 05:04:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.829 05:04:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:15:37.829 05:04:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:37.829 05:04:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:37.829 05:04:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:37.829 05:04:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:15:37.829 05:04:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:37.829 05:04:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:37.829 05:04:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:37.829 05:04:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:37.829 05:04:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:37.829 05:04:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:37.829 05:04:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:37.829 05:04:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.829 05:04:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:37.829 05:04:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.829 05:04:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:37.829 "name": "raid_bdev1", 00:15:37.829 "uuid": "0eb9127e-4a26-43a9-9af1-f26a2fc1bd81", 00:15:37.829 "strip_size_kb": 0, 00:15:37.829 "state": "configuring", 00:15:37.829 "raid_level": "raid1", 00:15:37.829 "superblock": true, 00:15:37.829 "num_base_bdevs": 2, 00:15:37.829 "num_base_bdevs_discovered": 1, 00:15:37.829 "num_base_bdevs_operational": 2, 00:15:37.829 "base_bdevs_list": [ 00:15:37.829 { 00:15:37.829 "name": "pt1", 00:15:37.829 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:37.829 "is_configured": true, 00:15:37.829 "data_offset": 256, 00:15:37.829 "data_size": 7936 00:15:37.829 }, 00:15:37.829 { 00:15:37.829 "name": null, 00:15:37.829 
"uuid": "00000000-0000-0000-0000-000000000002", 00:15:37.829 "is_configured": false, 00:15:37.829 "data_offset": 256, 00:15:37.829 "data_size": 7936 00:15:37.829 } 00:15:37.829 ] 00:15:37.829 }' 00:15:37.829 05:04:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:37.829 05:04:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:38.089 05:04:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:15:38.089 05:04:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:15:38.089 05:04:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:38.089 05:04:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:38.089 05:04:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.089 05:04:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:38.089 [2024-12-14 05:04:48.957369] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:38.089 [2024-12-14 05:04:48.957457] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:38.089 [2024-12-14 05:04:48.957494] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:15:38.089 [2024-12-14 05:04:48.957521] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:38.089 [2024-12-14 05:04:48.957688] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:38.089 [2024-12-14 05:04:48.957732] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:38.089 [2024-12-14 05:04:48.957795] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found 
on bdev pt2 00:15:38.089 [2024-12-14 05:04:48.957833] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:38.089 [2024-12-14 05:04:48.957929] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:15:38.089 [2024-12-14 05:04:48.957961] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:15:38.089 [2024-12-14 05:04:48.958043] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:15:38.089 [2024-12-14 05:04:48.958146] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:15:38.089 [2024-12-14 05:04:48.958205] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:15:38.089 [2024-12-14 05:04:48.958299] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:38.089 pt2 00:15:38.089 05:04:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.089 05:04:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:38.089 05:04:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:38.089 05:04:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:38.089 05:04:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:38.089 05:04:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:38.089 05:04:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:38.089 05:04:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:38.089 05:04:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:15:38.089 05:04:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:38.089 05:04:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:38.089 05:04:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:38.089 05:04:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:38.349 05:04:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:38.349 05:04:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:38.349 05:04:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.349 05:04:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:38.349 05:04:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.349 05:04:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:38.349 "name": "raid_bdev1", 00:15:38.349 "uuid": "0eb9127e-4a26-43a9-9af1-f26a2fc1bd81", 00:15:38.349 "strip_size_kb": 0, 00:15:38.349 "state": "online", 00:15:38.349 "raid_level": "raid1", 00:15:38.349 "superblock": true, 00:15:38.349 "num_base_bdevs": 2, 00:15:38.349 "num_base_bdevs_discovered": 2, 00:15:38.349 "num_base_bdevs_operational": 2, 00:15:38.349 "base_bdevs_list": [ 00:15:38.349 { 00:15:38.349 "name": "pt1", 00:15:38.349 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:38.349 "is_configured": true, 00:15:38.349 "data_offset": 256, 00:15:38.349 "data_size": 7936 00:15:38.349 }, 00:15:38.349 { 00:15:38.349 "name": "pt2", 00:15:38.349 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:38.349 "is_configured": true, 00:15:38.349 "data_offset": 256, 
00:15:38.349 "data_size": 7936 00:15:38.349 } 00:15:38.349 ] 00:15:38.349 }' 00:15:38.349 05:04:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:38.349 05:04:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:38.608 05:04:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:15:38.608 05:04:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:38.608 05:04:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:38.609 05:04:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:38.609 05:04:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:15:38.609 05:04:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:38.609 05:04:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:38.609 05:04:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:38.609 05:04:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.609 05:04:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:38.609 [2024-12-14 05:04:49.352937] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:38.609 05:04:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.609 05:04:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:38.609 "name": "raid_bdev1", 00:15:38.609 "aliases": [ 00:15:38.609 "0eb9127e-4a26-43a9-9af1-f26a2fc1bd81" 00:15:38.609 ], 00:15:38.609 "product_name": 
"Raid Volume", 00:15:38.609 "block_size": 4096, 00:15:38.609 "num_blocks": 7936, 00:15:38.609 "uuid": "0eb9127e-4a26-43a9-9af1-f26a2fc1bd81", 00:15:38.609 "md_size": 32, 00:15:38.609 "md_interleave": false, 00:15:38.609 "dif_type": 0, 00:15:38.609 "assigned_rate_limits": { 00:15:38.609 "rw_ios_per_sec": 0, 00:15:38.609 "rw_mbytes_per_sec": 0, 00:15:38.609 "r_mbytes_per_sec": 0, 00:15:38.609 "w_mbytes_per_sec": 0 00:15:38.609 }, 00:15:38.609 "claimed": false, 00:15:38.609 "zoned": false, 00:15:38.609 "supported_io_types": { 00:15:38.609 "read": true, 00:15:38.609 "write": true, 00:15:38.609 "unmap": false, 00:15:38.609 "flush": false, 00:15:38.609 "reset": true, 00:15:38.609 "nvme_admin": false, 00:15:38.609 "nvme_io": false, 00:15:38.609 "nvme_io_md": false, 00:15:38.609 "write_zeroes": true, 00:15:38.609 "zcopy": false, 00:15:38.609 "get_zone_info": false, 00:15:38.609 "zone_management": false, 00:15:38.609 "zone_append": false, 00:15:38.609 "compare": false, 00:15:38.609 "compare_and_write": false, 00:15:38.609 "abort": false, 00:15:38.609 "seek_hole": false, 00:15:38.609 "seek_data": false, 00:15:38.609 "copy": false, 00:15:38.609 "nvme_iov_md": false 00:15:38.609 }, 00:15:38.609 "memory_domains": [ 00:15:38.609 { 00:15:38.609 "dma_device_id": "system", 00:15:38.609 "dma_device_type": 1 00:15:38.609 }, 00:15:38.609 { 00:15:38.609 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:38.609 "dma_device_type": 2 00:15:38.609 }, 00:15:38.609 { 00:15:38.609 "dma_device_id": "system", 00:15:38.609 "dma_device_type": 1 00:15:38.609 }, 00:15:38.609 { 00:15:38.609 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:38.609 "dma_device_type": 2 00:15:38.609 } 00:15:38.609 ], 00:15:38.609 "driver_specific": { 00:15:38.609 "raid": { 00:15:38.609 "uuid": "0eb9127e-4a26-43a9-9af1-f26a2fc1bd81", 00:15:38.609 "strip_size_kb": 0, 00:15:38.609 "state": "online", 00:15:38.609 "raid_level": "raid1", 00:15:38.609 "superblock": true, 00:15:38.609 "num_base_bdevs": 2, 00:15:38.609 
"num_base_bdevs_discovered": 2, 00:15:38.609 "num_base_bdevs_operational": 2, 00:15:38.609 "base_bdevs_list": [ 00:15:38.609 { 00:15:38.609 "name": "pt1", 00:15:38.609 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:38.609 "is_configured": true, 00:15:38.609 "data_offset": 256, 00:15:38.609 "data_size": 7936 00:15:38.609 }, 00:15:38.609 { 00:15:38.609 "name": "pt2", 00:15:38.609 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:38.609 "is_configured": true, 00:15:38.609 "data_offset": 256, 00:15:38.609 "data_size": 7936 00:15:38.609 } 00:15:38.609 ] 00:15:38.609 } 00:15:38.609 } 00:15:38.609 }' 00:15:38.609 05:04:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:38.609 05:04:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:38.609 pt2' 00:15:38.609 05:04:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:38.609 05:04:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:15:38.609 05:04:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:38.609 05:04:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:38.609 05:04:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.609 05:04:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:38.609 05:04:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:38.869 05:04:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.869 
05:04:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:15:38.869 05:04:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:15:38.869 05:04:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:38.869 05:04:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:38.869 05:04:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.869 05:04:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:38.869 05:04:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:38.869 05:04:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.869 05:04:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:15:38.869 05:04:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:15:38.869 05:04:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:38.869 05:04:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.869 05:04:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:15:38.869 05:04:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:38.869 [2024-12-14 05:04:49.584513] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:38.869 05:04:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:15:38.869 05:04:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # '[' 0eb9127e-4a26-43a9-9af1-f26a2fc1bd81 '!=' 0eb9127e-4a26-43a9-9af1-f26a2fc1bd81 ']' 00:15:38.869 05:04:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:15:38.869 05:04:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:38.869 05:04:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:15:38.869 05:04:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:15:38.869 05:04:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.869 05:04:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:38.869 [2024-12-14 05:04:49.628249] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:15:38.869 05:04:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.869 05:04:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:38.869 05:04:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:38.869 05:04:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:38.869 05:04:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:38.869 05:04:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:38.869 05:04:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:38.869 05:04:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:38.869 05:04:49 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:38.869 05:04:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:38.869 05:04:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:38.869 05:04:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:38.869 05:04:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:38.869 05:04:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.869 05:04:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:38.869 05:04:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.869 05:04:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:38.869 "name": "raid_bdev1", 00:15:38.869 "uuid": "0eb9127e-4a26-43a9-9af1-f26a2fc1bd81", 00:15:38.869 "strip_size_kb": 0, 00:15:38.869 "state": "online", 00:15:38.869 "raid_level": "raid1", 00:15:38.869 "superblock": true, 00:15:38.869 "num_base_bdevs": 2, 00:15:38.869 "num_base_bdevs_discovered": 1, 00:15:38.869 "num_base_bdevs_operational": 1, 00:15:38.869 "base_bdevs_list": [ 00:15:38.869 { 00:15:38.869 "name": null, 00:15:38.869 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:38.869 "is_configured": false, 00:15:38.869 "data_offset": 0, 00:15:38.869 "data_size": 7936 00:15:38.869 }, 00:15:38.869 { 00:15:38.869 "name": "pt2", 00:15:38.869 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:38.869 "is_configured": true, 00:15:38.869 "data_offset": 256, 00:15:38.869 "data_size": 7936 00:15:38.869 } 00:15:38.869 ] 00:15:38.869 }' 00:15:38.869 05:04:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:15:38.869 05:04:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:39.438 05:04:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:39.438 05:04:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.438 05:04:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:39.438 [2024-12-14 05:04:50.103389] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:39.438 [2024-12-14 05:04:50.103468] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:39.438 [2024-12-14 05:04:50.103534] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:39.438 [2024-12-14 05:04:50.103593] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:39.438 [2024-12-14 05:04:50.103625] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:15:39.438 05:04:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.438 05:04:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:39.438 05:04:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:15:39.438 05:04:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.438 05:04:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:39.438 05:04:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.438 05:04:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:15:39.438 05:04:50 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:15:39.438 05:04:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:15:39.438 05:04:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:39.438 05:04:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:15:39.438 05:04:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.438 05:04:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:39.438 05:04:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.438 05:04:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:15:39.438 05:04:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:39.438 05:04:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:15:39.438 05:04:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:15:39.438 05:04:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@519 -- # i=1 00:15:39.438 05:04:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:39.438 05:04:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.438 05:04:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:39.438 [2024-12-14 05:04:50.175311] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:39.438 [2024-12-14 05:04:50.175377] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:39.438 
[2024-12-14 05:04:50.175405] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:15:39.438 [2024-12-14 05:04:50.175414] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:39.438 [2024-12-14 05:04:50.177335] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:39.438 [2024-12-14 05:04:50.177371] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:39.438 [2024-12-14 05:04:50.177416] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:39.438 [2024-12-14 05:04:50.177439] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:39.438 [2024-12-14 05:04:50.177494] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:15:39.438 [2024-12-14 05:04:50.177502] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:15:39.438 [2024-12-14 05:04:50.177568] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:15:39.438 [2024-12-14 05:04:50.177638] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:15:39.438 [2024-12-14 05:04:50.177648] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00 00:15:39.438 [2024-12-14 05:04:50.177703] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:39.438 pt2 00:15:39.438 05:04:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.438 05:04:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:39.438 05:04:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:39.439 05:04:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- 
# local expected_state=online 00:15:39.439 05:04:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:39.439 05:04:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:39.439 05:04:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:39.439 05:04:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:39.439 05:04:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:39.439 05:04:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:39.439 05:04:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:39.439 05:04:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:39.439 05:04:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:39.439 05:04:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.439 05:04:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:39.439 05:04:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.439 05:04:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:39.439 "name": "raid_bdev1", 00:15:39.439 "uuid": "0eb9127e-4a26-43a9-9af1-f26a2fc1bd81", 00:15:39.439 "strip_size_kb": 0, 00:15:39.439 "state": "online", 00:15:39.439 "raid_level": "raid1", 00:15:39.439 "superblock": true, 00:15:39.439 "num_base_bdevs": 2, 00:15:39.439 "num_base_bdevs_discovered": 1, 00:15:39.439 "num_base_bdevs_operational": 1, 00:15:39.439 "base_bdevs_list": [ 00:15:39.439 { 00:15:39.439 
"name": null, 00:15:39.439 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:39.439 "is_configured": false, 00:15:39.439 "data_offset": 256, 00:15:39.439 "data_size": 7936 00:15:39.439 }, 00:15:39.439 { 00:15:39.439 "name": "pt2", 00:15:39.439 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:39.439 "is_configured": true, 00:15:39.439 "data_offset": 256, 00:15:39.439 "data_size": 7936 00:15:39.439 } 00:15:39.439 ] 00:15:39.439 }' 00:15:39.439 05:04:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:39.439 05:04:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:40.008 05:04:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:40.008 05:04:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.008 05:04:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:40.008 [2024-12-14 05:04:50.618545] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:40.008 [2024-12-14 05:04:50.618605] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:40.008 [2024-12-14 05:04:50.618673] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:40.008 [2024-12-14 05:04:50.618721] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:40.008 [2024-12-14 05:04:50.618753] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:15:40.008 05:04:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.008 05:04:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:40.008 05:04:50 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.008 05:04:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:15:40.008 05:04:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:40.008 05:04:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.008 05:04:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:15:40.008 05:04:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:15:40.008 05:04:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:15:40.008 05:04:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:40.008 05:04:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.008 05:04:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:40.008 [2024-12-14 05:04:50.678430] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:40.008 [2024-12-14 05:04:50.678514] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:40.008 [2024-12-14 05:04:50.678547] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:15:40.008 [2024-12-14 05:04:50.678578] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:40.008 [2024-12-14 05:04:50.680522] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:40.008 [2024-12-14 05:04:50.680591] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:40.008 [2024-12-14 05:04:50.680653] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock 
found on bdev pt1 00:15:40.008 [2024-12-14 05:04:50.680714] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:40.008 [2024-12-14 05:04:50.680844] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:15:40.008 [2024-12-14 05:04:50.680895] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:40.008 [2024-12-14 05:04:50.680927] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state configuring 00:15:40.008 [2024-12-14 05:04:50.680991] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:40.008 [2024-12-14 05:04:50.681061] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:15:40.008 [2024-12-14 05:04:50.681100] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:15:40.008 [2024-12-14 05:04:50.681196] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:15:40.008 [2024-12-14 05:04:50.681299] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:15:40.008 [2024-12-14 05:04:50.681333] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:15:40.008 [2024-12-14 05:04:50.681440] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:40.008 pt1 00:15:40.008 05:04:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.008 05:04:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:15:40.008 05:04:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:40.008 05:04:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 
00:15:40.008 05:04:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:40.008 05:04:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:40.008 05:04:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:40.008 05:04:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:40.008 05:04:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:40.008 05:04:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:40.008 05:04:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:40.008 05:04:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:40.008 05:04:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:40.008 05:04:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:40.008 05:04:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.008 05:04:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:40.008 05:04:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.008 05:04:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:40.008 "name": "raid_bdev1", 00:15:40.008 "uuid": "0eb9127e-4a26-43a9-9af1-f26a2fc1bd81", 00:15:40.008 "strip_size_kb": 0, 00:15:40.008 "state": "online", 00:15:40.008 "raid_level": "raid1", 00:15:40.008 "superblock": true, 00:15:40.008 "num_base_bdevs": 2, 00:15:40.008 "num_base_bdevs_discovered": 1, 00:15:40.008 
"num_base_bdevs_operational": 1, 00:15:40.008 "base_bdevs_list": [ 00:15:40.008 { 00:15:40.008 "name": null, 00:15:40.008 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:40.008 "is_configured": false, 00:15:40.008 "data_offset": 256, 00:15:40.008 "data_size": 7936 00:15:40.008 }, 00:15:40.008 { 00:15:40.008 "name": "pt2", 00:15:40.008 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:40.008 "is_configured": true, 00:15:40.008 "data_offset": 256, 00:15:40.008 "data_size": 7936 00:15:40.008 } 00:15:40.008 ] 00:15:40.008 }' 00:15:40.008 05:04:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:40.008 05:04:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:40.268 05:04:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:15:40.268 05:04:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:15:40.268 05:04:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.268 05:04:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:40.268 05:04:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.268 05:04:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:15:40.268 05:04:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:15:40.268 05:04:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:40.268 05:04:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.268 05:04:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:40.268 [2024-12-14 
05:04:51.117907] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:40.268 05:04:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.529 05:04:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # '[' 0eb9127e-4a26-43a9-9af1-f26a2fc1bd81 '!=' 0eb9127e-4a26-43a9-9af1-f26a2fc1bd81 ']' 00:15:40.529 05:04:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@563 -- # killprocess 97800 00:15:40.529 05:04:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@950 -- # '[' -z 97800 ']' 00:15:40.529 05:04:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@954 -- # kill -0 97800 00:15:40.529 05:04:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@955 -- # uname 00:15:40.529 05:04:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:40.529 05:04:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 97800 00:15:40.529 05:04:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:40.529 killing process with pid 97800 00:15:40.529 05:04:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:40.529 05:04:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@968 -- # echo 'killing process with pid 97800' 00:15:40.529 05:04:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@969 -- # kill 97800 00:15:40.529 [2024-12-14 05:04:51.200476] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:40.529 [2024-12-14 05:04:51.200536] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:40.529 [2024-12-14 05:04:51.200572] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev 
base bdevs is 0, going to free all in destruct 00:15:40.529 [2024-12-14 05:04:51.200580] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:15:40.529 05:04:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@974 -- # wait 97800 00:15:40.529 [2024-12-14 05:04:51.225301] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:40.788 05:04:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@565 -- # return 0 00:15:40.788 ************************************ 00:15:40.788 END TEST raid_superblock_test_md_separate 00:15:40.788 ************************************ 00:15:40.788 00:15:40.788 real 0m4.984s 00:15:40.788 user 0m8.022s 00:15:40.788 sys 0m1.182s 00:15:40.788 05:04:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:40.788 05:04:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:40.788 05:04:51 bdev_raid -- bdev/bdev_raid.sh@1006 -- # '[' true = true ']' 00:15:40.789 05:04:51 bdev_raid -- bdev/bdev_raid.sh@1007 -- # run_test raid_rebuild_test_sb_md_separate raid_rebuild_test raid1 2 true false true 00:15:40.789 05:04:51 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:15:40.789 05:04:51 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:40.789 05:04:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:40.789 ************************************ 00:15:40.789 START TEST raid_rebuild_test_sb_md_separate 00:15:40.789 ************************************ 00:15:40.789 05:04:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true false true 00:15:40.789 05:04:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:15:40.789 05:04:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@570 -- # local 
num_base_bdevs=2 00:15:40.789 05:04:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:15:40.789 05:04:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:15:40.789 05:04:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:40.789 05:04:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:40.789 05:04:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:40.789 05:04:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:40.789 05:04:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:40.789 05:04:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:40.789 05:04:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:40.789 05:04:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:40.789 05:04:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:40.789 05:04:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:15:40.789 05:04:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:40.789 05:04:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:40.789 05:04:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:40.789 05:04:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:40.789 05:04:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:40.789 
05:04:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:40.789 05:04:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:15:40.789 05:04:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:15:40.789 05:04:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:15:40.789 05:04:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:15:40.789 05:04:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@597 -- # raid_pid=98116 00:15:40.789 05:04:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:40.789 05:04:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@598 -- # waitforlisten 98116 00:15:40.789 05:04:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@831 -- # '[' -z 98116 ']' 00:15:40.789 05:04:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:40.789 05:04:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:40.789 05:04:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:40.789 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:15:40.789 05:04:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:40.789 05:04:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:41.049 [2024-12-14 05:04:51.670431] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:15:41.049 [2024-12-14 05:04:51.670670] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98116 ] 00:15:41.049 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:41.049 Zero copy mechanism will not be used. 00:15:41.049 [2024-12-14 05:04:51.844425] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:41.049 [2024-12-14 05:04:51.891131] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:41.309 [2024-12-14 05:04:51.934238] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:41.309 [2024-12-14 05:04:51.934347] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:41.878 05:04:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:41.878 05:04:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@864 -- # return 0 00:15:41.878 05:04:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:41.878 05:04:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1_malloc 00:15:41.878 05:04:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.878 05:04:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:41.878 BaseBdev1_malloc 
00:15:41.878 05:04:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.878 05:04:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:41.878 05:04:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.878 05:04:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:41.878 [2024-12-14 05:04:52.493404] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:41.878 [2024-12-14 05:04:52.493461] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:41.878 [2024-12-14 05:04:52.493484] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:41.878 [2024-12-14 05:04:52.493492] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:41.878 [2024-12-14 05:04:52.495307] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:41.878 [2024-12-14 05:04:52.495417] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:41.878 BaseBdev1 00:15:41.878 05:04:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.878 05:04:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:41.878 05:04:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2_malloc 00:15:41.878 05:04:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.878 05:04:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:41.878 BaseBdev2_malloc 00:15:41.878 05:04:52 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.878 05:04:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:41.878 05:04:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.878 05:04:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:41.878 [2024-12-14 05:04:52.537649] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:41.878 [2024-12-14 05:04:52.537830] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:41.878 [2024-12-14 05:04:52.537880] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:41.878 [2024-12-14 05:04:52.537901] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:41.878 [2024-12-14 05:04:52.541985] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:41.878 [2024-12-14 05:04:52.542055] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:41.878 BaseBdev2 00:15:41.878 05:04:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.878 05:04:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b spare_malloc 00:15:41.878 05:04:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.878 05:04:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:41.878 spare_malloc 00:15:41.878 05:04:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.878 05:04:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 
100000 -n 100000 00:15:41.878 05:04:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.878 05:04:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:41.878 spare_delay 00:15:41.878 05:04:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.878 05:04:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:41.878 05:04:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.878 05:04:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:41.878 [2024-12-14 05:04:52.580867] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:41.878 [2024-12-14 05:04:52.580959] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:41.878 [2024-12-14 05:04:52.580983] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:15:41.878 [2024-12-14 05:04:52.580993] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:41.878 [2024-12-14 05:04:52.582856] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:41.878 [2024-12-14 05:04:52.582892] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:41.878 spare 00:15:41.878 05:04:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.878 05:04:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:15:41.878 05:04:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.878 05:04:52 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:15:41.878 [2024-12-14 05:04:52.592883] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:41.879 [2024-12-14 05:04:52.594636] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:41.879 [2024-12-14 05:04:52.594787] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:15:41.879 [2024-12-14 05:04:52.594804] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:15:41.879 [2024-12-14 05:04:52.594880] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:15:41.879 [2024-12-14 05:04:52.594968] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:15:41.879 [2024-12-14 05:04:52.594980] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:15:41.879 [2024-12-14 05:04:52.595050] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:41.879 05:04:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.879 05:04:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:41.879 05:04:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:41.879 05:04:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:41.879 05:04:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:41.879 05:04:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:41.879 05:04:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:41.879 05:04:52 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:41.879 05:04:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:41.879 05:04:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:41.879 05:04:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:41.879 05:04:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:41.879 05:04:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:41.879 05:04:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.879 05:04:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:41.879 05:04:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.879 05:04:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:41.879 "name": "raid_bdev1", 00:15:41.879 "uuid": "7e5f4903-2745-481f-9031-26340d811bda", 00:15:41.879 "strip_size_kb": 0, 00:15:41.879 "state": "online", 00:15:41.879 "raid_level": "raid1", 00:15:41.879 "superblock": true, 00:15:41.879 "num_base_bdevs": 2, 00:15:41.879 "num_base_bdevs_discovered": 2, 00:15:41.879 "num_base_bdevs_operational": 2, 00:15:41.879 "base_bdevs_list": [ 00:15:41.879 { 00:15:41.879 "name": "BaseBdev1", 00:15:41.879 "uuid": "fb4a6135-3d79-5ccf-8cff-222063ac018f", 00:15:41.879 "is_configured": true, 00:15:41.879 "data_offset": 256, 00:15:41.879 "data_size": 7936 00:15:41.879 }, 00:15:41.879 { 00:15:41.879 "name": "BaseBdev2", 00:15:41.879 "uuid": "444ed6a0-3c52-546a-8be2-69995b789b01", 00:15:41.879 "is_configured": true, 00:15:41.879 "data_offset": 256, 00:15:41.879 "data_size": 7936 
00:15:41.879 } 00:15:41.879 ] 00:15:41.879 }' 00:15:41.879 05:04:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:41.879 05:04:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:42.448 05:04:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:42.448 05:04:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:42.448 05:04:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.448 05:04:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:42.448 [2024-12-14 05:04:53.040391] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:42.448 05:04:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.448 05:04:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:15:42.448 05:04:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:42.448 05:04:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:42.448 05:04:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.448 05:04:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:42.448 05:04:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.448 05:04:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:15:42.448 05:04:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:15:42.448 05:04:53 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:15:42.448 05:04:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:15:42.448 05:04:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:15:42.448 05:04:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:42.448 05:04:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:15:42.448 05:04:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:42.448 05:04:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:42.448 05:04:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:42.448 05:04:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:15:42.448 05:04:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:42.448 05:04:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:42.448 05:04:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:15:42.448 [2024-12-14 05:04:53.307696] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:15:42.448 /dev/nbd0 00:15:42.707 05:04:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:42.707 05:04:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:42.707 05:04:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:15:42.707 05:04:53 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@869 -- # local i 00:15:42.707 05:04:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:42.707 05:04:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:42.708 05:04:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:15:42.708 05:04:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # break 00:15:42.708 05:04:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:42.708 05:04:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:42.708 05:04:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:42.708 1+0 records in 00:15:42.708 1+0 records out 00:15:42.708 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000453666 s, 9.0 MB/s 00:15:42.708 05:04:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:42.708 05:04:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # size=4096 00:15:42.708 05:04:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:42.708 05:04:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:42.708 05:04:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # return 0 00:15:42.708 05:04:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:42.708 05:04:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:42.708 05:04:53 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:15:42.708 05:04:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:15:42.708 05:04:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:15:43.275 7936+0 records in 00:15:43.275 7936+0 records out 00:15:43.275 32505856 bytes (33 MB, 31 MiB) copied, 0.580672 s, 56.0 MB/s 00:15:43.275 05:04:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:43.275 05:04:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:43.275 05:04:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:43.275 05:04:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:43.275 05:04:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:15:43.275 05:04:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:43.275 05:04:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:43.534 05:04:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:43.534 [2024-12-14 05:04:54.177432] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:43.534 05:04:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:43.535 05:04:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:43.535 05:04:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:43.535 05:04:54 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:43.535 05:04:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:43.535 05:04:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:15:43.535 05:04:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:15:43.535 05:04:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:43.535 05:04:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.535 05:04:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:43.535 [2024-12-14 05:04:54.189694] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:43.535 05:04:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.535 05:04:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:43.535 05:04:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:43.535 05:04:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:43.535 05:04:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:43.535 05:04:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:43.535 05:04:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:43.535 05:04:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:43.535 05:04:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:15:43.535 05:04:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:43.535 05:04:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:43.535 05:04:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:43.535 05:04:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:43.535 05:04:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.535 05:04:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:43.535 05:04:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.535 05:04:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:43.535 "name": "raid_bdev1", 00:15:43.535 "uuid": "7e5f4903-2745-481f-9031-26340d811bda", 00:15:43.535 "strip_size_kb": 0, 00:15:43.535 "state": "online", 00:15:43.535 "raid_level": "raid1", 00:15:43.535 "superblock": true, 00:15:43.535 "num_base_bdevs": 2, 00:15:43.535 "num_base_bdevs_discovered": 1, 00:15:43.535 "num_base_bdevs_operational": 1, 00:15:43.535 "base_bdevs_list": [ 00:15:43.535 { 00:15:43.535 "name": null, 00:15:43.535 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:43.535 "is_configured": false, 00:15:43.535 "data_offset": 0, 00:15:43.535 "data_size": 7936 00:15:43.535 }, 00:15:43.535 { 00:15:43.535 "name": "BaseBdev2", 00:15:43.535 "uuid": "444ed6a0-3c52-546a-8be2-69995b789b01", 00:15:43.535 "is_configured": true, 00:15:43.535 "data_offset": 256, 00:15:43.535 "data_size": 7936 00:15:43.535 } 00:15:43.535 ] 00:15:43.535 }' 00:15:43.535 05:04:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:43.535 05:04:54 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:15:43.795 05:04:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:43.795 05:04:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.795 05:04:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:43.795 [2024-12-14 05:04:54.648911] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:43.795 [2024-12-14 05:04:54.652029] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d0c0 00:15:43.795 [2024-12-14 05:04:54.654237] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:43.795 05:04:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.795 05:04:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:45.179 05:04:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:45.179 05:04:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:45.179 05:04:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:45.179 05:04:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:45.179 05:04:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:45.179 05:04:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:45.179 05:04:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:45.179 05:04:55 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.179 05:04:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:45.179 05:04:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.179 05:04:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:45.179 "name": "raid_bdev1", 00:15:45.179 "uuid": "7e5f4903-2745-481f-9031-26340d811bda", 00:15:45.179 "strip_size_kb": 0, 00:15:45.179 "state": "online", 00:15:45.179 "raid_level": "raid1", 00:15:45.179 "superblock": true, 00:15:45.179 "num_base_bdevs": 2, 00:15:45.179 "num_base_bdevs_discovered": 2, 00:15:45.179 "num_base_bdevs_operational": 2, 00:15:45.179 "process": { 00:15:45.179 "type": "rebuild", 00:15:45.179 "target": "spare", 00:15:45.179 "progress": { 00:15:45.179 "blocks": 2560, 00:15:45.179 "percent": 32 00:15:45.179 } 00:15:45.179 }, 00:15:45.179 "base_bdevs_list": [ 00:15:45.179 { 00:15:45.179 "name": "spare", 00:15:45.179 "uuid": "8ee59576-e281-5f6a-bcff-aed9bcce5841", 00:15:45.179 "is_configured": true, 00:15:45.179 "data_offset": 256, 00:15:45.179 "data_size": 7936 00:15:45.179 }, 00:15:45.179 { 00:15:45.179 "name": "BaseBdev2", 00:15:45.179 "uuid": "444ed6a0-3c52-546a-8be2-69995b789b01", 00:15:45.179 "is_configured": true, 00:15:45.179 "data_offset": 256, 00:15:45.179 "data_size": 7936 00:15:45.179 } 00:15:45.179 ] 00:15:45.179 }' 00:15:45.179 05:04:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:45.179 05:04:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:45.179 05:04:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:45.179 05:04:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:45.179 05:04:55 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:45.179 05:04:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.179 05:04:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:45.179 [2024-12-14 05:04:55.797362] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:45.179 [2024-12-14 05:04:55.862923] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:45.179 [2024-12-14 05:04:55.863004] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:45.179 [2024-12-14 05:04:55.863029] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:45.179 [2024-12-14 05:04:55.863045] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:45.179 05:04:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.179 05:04:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:45.179 05:04:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:45.179 05:04:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:45.179 05:04:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:45.179 05:04:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:45.179 05:04:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:45.179 05:04:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:45.179 05:04:55 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:45.179 05:04:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:45.179 05:04:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:45.179 05:04:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:45.179 05:04:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:45.179 05:04:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.179 05:04:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:45.179 05:04:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.179 05:04:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:45.179 "name": "raid_bdev1", 00:15:45.179 "uuid": "7e5f4903-2745-481f-9031-26340d811bda", 00:15:45.179 "strip_size_kb": 0, 00:15:45.179 "state": "online", 00:15:45.179 "raid_level": "raid1", 00:15:45.179 "superblock": true, 00:15:45.179 "num_base_bdevs": 2, 00:15:45.179 "num_base_bdevs_discovered": 1, 00:15:45.179 "num_base_bdevs_operational": 1, 00:15:45.179 "base_bdevs_list": [ 00:15:45.179 { 00:15:45.179 "name": null, 00:15:45.179 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:45.179 "is_configured": false, 00:15:45.179 "data_offset": 0, 00:15:45.179 "data_size": 7936 00:15:45.179 }, 00:15:45.179 { 00:15:45.179 "name": "BaseBdev2", 00:15:45.180 "uuid": "444ed6a0-3c52-546a-8be2-69995b789b01", 00:15:45.180 "is_configured": true, 00:15:45.180 "data_offset": 256, 00:15:45.180 "data_size": 7936 00:15:45.180 } 00:15:45.180 ] 00:15:45.180 }' 00:15:45.180 05:04:55 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:45.180 05:04:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:45.749 05:04:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:45.749 05:04:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:45.749 05:04:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:45.749 05:04:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:45.749 05:04:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:45.749 05:04:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:45.749 05:04:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.749 05:04:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:45.749 05:04:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:45.749 05:04:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.749 05:04:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:45.749 "name": "raid_bdev1", 00:15:45.749 "uuid": "7e5f4903-2745-481f-9031-26340d811bda", 00:15:45.749 "strip_size_kb": 0, 00:15:45.749 "state": "online", 00:15:45.749 "raid_level": "raid1", 00:15:45.749 "superblock": true, 00:15:45.749 "num_base_bdevs": 2, 00:15:45.749 "num_base_bdevs_discovered": 1, 00:15:45.749 "num_base_bdevs_operational": 1, 00:15:45.749 "base_bdevs_list": [ 00:15:45.749 { 00:15:45.749 "name": null, 00:15:45.749 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:45.749 
"is_configured": false, 00:15:45.749 "data_offset": 0, 00:15:45.749 "data_size": 7936 00:15:45.749 }, 00:15:45.749 { 00:15:45.749 "name": "BaseBdev2", 00:15:45.749 "uuid": "444ed6a0-3c52-546a-8be2-69995b789b01", 00:15:45.749 "is_configured": true, 00:15:45.749 "data_offset": 256, 00:15:45.750 "data_size": 7936 00:15:45.750 } 00:15:45.750 ] 00:15:45.750 }' 00:15:45.750 05:04:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:45.750 05:04:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:45.750 05:04:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:45.750 05:04:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:45.750 05:04:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:45.750 05:04:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.750 05:04:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:45.750 [2024-12-14 05:04:56.482899] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:45.750 [2024-12-14 05:04:56.485891] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d190 00:15:45.750 [2024-12-14 05:04:56.488081] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:45.750 05:04:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.750 05:04:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:46.689 05:04:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:46.689 05:04:57 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:46.689 05:04:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:46.689 05:04:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:46.689 05:04:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:46.689 05:04:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:46.689 05:04:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:46.689 05:04:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.689 05:04:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:46.689 05:04:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.689 05:04:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:46.689 "name": "raid_bdev1", 00:15:46.689 "uuid": "7e5f4903-2745-481f-9031-26340d811bda", 00:15:46.689 "strip_size_kb": 0, 00:15:46.689 "state": "online", 00:15:46.689 "raid_level": "raid1", 00:15:46.689 "superblock": true, 00:15:46.689 "num_base_bdevs": 2, 00:15:46.689 "num_base_bdevs_discovered": 2, 00:15:46.689 "num_base_bdevs_operational": 2, 00:15:46.689 "process": { 00:15:46.689 "type": "rebuild", 00:15:46.689 "target": "spare", 00:15:46.689 "progress": { 00:15:46.689 "blocks": 2560, 00:15:46.689 "percent": 32 00:15:46.689 } 00:15:46.689 }, 00:15:46.689 "base_bdevs_list": [ 00:15:46.689 { 00:15:46.690 "name": "spare", 00:15:46.690 "uuid": "8ee59576-e281-5f6a-bcff-aed9bcce5841", 00:15:46.690 "is_configured": true, 00:15:46.690 "data_offset": 256, 00:15:46.690 "data_size": 7936 00:15:46.690 }, 
00:15:46.690 { 00:15:46.690 "name": "BaseBdev2", 00:15:46.690 "uuid": "444ed6a0-3c52-546a-8be2-69995b789b01", 00:15:46.690 "is_configured": true, 00:15:46.690 "data_offset": 256, 00:15:46.690 "data_size": 7936 00:15:46.690 } 00:15:46.690 ] 00:15:46.690 }' 00:15:46.690 05:04:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:46.950 05:04:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:46.950 05:04:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:46.950 05:04:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:46.950 05:04:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:15:46.950 05:04:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:15:46.950 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:15:46.950 05:04:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:15:46.950 05:04:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:15:46.950 05:04:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:15:46.950 05:04:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # local timeout=588 00:15:46.950 05:04:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:46.950 05:04:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:46.950 05:04:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:46.950 05:04:57 
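The `[: =: unary operator expected` failure from bdev_raid.sh line 666 above is the classic unquoted-empty-variable pitfall in POSIX test brackets: when the left-hand variable expands to nothing, `'[' = false ']'` is what the shell actually sees, leaving `=` with no left operand. A minimal sketch of the failure mode and the quoting fix (the `flag` variable name here is hypothetical, not taken from bdev_raid.sh):

```shell
#!/bin/sh
# Buggy form (what the trace above effectively ran when the variable was empty):
#   [ $maybe_empty = false ]   ->  "[: =: unary operator expected"
# Quoting the expansion keeps an empty-string operand in place, so the
# three-argument test '[ "" = false ]' parses and simply evaluates to false.
unset flag
if [ "${flag:-}" = false ]; then
  echo "flag is false"
else
  echo "flag is empty or true"
fi
```

Note this only masks the symptom; the underlying test still compares an empty string, so the script should also ensure the variable is assigned before the check (or use `[[ ... ]]` in bash, which does not word-split expansions).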
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:46.950 05:04:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:46.950 05:04:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:46.950 05:04:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:46.950 05:04:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:46.950 05:04:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.950 05:04:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:46.950 05:04:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.950 05:04:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:46.950 "name": "raid_bdev1", 00:15:46.950 "uuid": "7e5f4903-2745-481f-9031-26340d811bda", 00:15:46.950 "strip_size_kb": 0, 00:15:46.950 "state": "online", 00:15:46.950 "raid_level": "raid1", 00:15:46.950 "superblock": true, 00:15:46.950 "num_base_bdevs": 2, 00:15:46.950 "num_base_bdevs_discovered": 2, 00:15:46.950 "num_base_bdevs_operational": 2, 00:15:46.950 "process": { 00:15:46.950 "type": "rebuild", 00:15:46.950 "target": "spare", 00:15:46.950 "progress": { 00:15:46.950 "blocks": 2816, 00:15:46.950 "percent": 35 00:15:46.950 } 00:15:46.950 }, 00:15:46.950 "base_bdevs_list": [ 00:15:46.950 { 00:15:46.950 "name": "spare", 00:15:46.950 "uuid": "8ee59576-e281-5f6a-bcff-aed9bcce5841", 00:15:46.950 "is_configured": true, 00:15:46.950 "data_offset": 256, 00:15:46.950 "data_size": 7936 00:15:46.950 }, 00:15:46.950 { 00:15:46.950 "name": "BaseBdev2", 00:15:46.950 "uuid": "444ed6a0-3c52-546a-8be2-69995b789b01", 00:15:46.950 
"is_configured": true, 00:15:46.950 "data_offset": 256, 00:15:46.950 "data_size": 7936 00:15:46.950 } 00:15:46.950 ] 00:15:46.950 }' 00:15:46.950 05:04:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:46.950 05:04:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:46.950 05:04:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:46.950 05:04:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:46.950 05:04:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:47.890 05:04:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:47.890 05:04:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:47.890 05:04:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:47.890 05:04:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:47.890 05:04:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:47.890 05:04:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:47.890 05:04:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:47.890 05:04:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:47.890 05:04:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.890 05:04:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:48.149 05:04:58 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.149 05:04:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:48.149 "name": "raid_bdev1", 00:15:48.149 "uuid": "7e5f4903-2745-481f-9031-26340d811bda", 00:15:48.149 "strip_size_kb": 0, 00:15:48.149 "state": "online", 00:15:48.149 "raid_level": "raid1", 00:15:48.149 "superblock": true, 00:15:48.149 "num_base_bdevs": 2, 00:15:48.149 "num_base_bdevs_discovered": 2, 00:15:48.149 "num_base_bdevs_operational": 2, 00:15:48.149 "process": { 00:15:48.149 "type": "rebuild", 00:15:48.149 "target": "spare", 00:15:48.149 "progress": { 00:15:48.149 "blocks": 5632, 00:15:48.149 "percent": 70 00:15:48.149 } 00:15:48.149 }, 00:15:48.149 "base_bdevs_list": [ 00:15:48.149 { 00:15:48.149 "name": "spare", 00:15:48.149 "uuid": "8ee59576-e281-5f6a-bcff-aed9bcce5841", 00:15:48.149 "is_configured": true, 00:15:48.149 "data_offset": 256, 00:15:48.149 "data_size": 7936 00:15:48.149 }, 00:15:48.149 { 00:15:48.149 "name": "BaseBdev2", 00:15:48.149 "uuid": "444ed6a0-3c52-546a-8be2-69995b789b01", 00:15:48.149 "is_configured": true, 00:15:48.149 "data_offset": 256, 00:15:48.149 "data_size": 7936 00:15:48.149 } 00:15:48.149 ] 00:15:48.149 }' 00:15:48.149 05:04:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:48.149 05:04:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:48.149 05:04:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:48.149 05:04:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:48.149 05:04:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:49.088 [2024-12-14 05:04:59.608657] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process 
completed on raid_bdev1 00:15:49.088 [2024-12-14 05:04:59.608743] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:49.088 [2024-12-14 05:04:59.608864] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:49.088 05:04:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:49.088 05:04:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:49.088 05:04:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:49.088 05:04:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:49.088 05:04:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:49.088 05:04:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:49.088 05:04:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:49.088 05:04:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:49.088 05:04:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.088 05:04:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:49.088 05:04:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.088 05:04:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:49.088 "name": "raid_bdev1", 00:15:49.088 "uuid": "7e5f4903-2745-481f-9031-26340d811bda", 00:15:49.088 "strip_size_kb": 0, 00:15:49.088 "state": "online", 00:15:49.088 "raid_level": "raid1", 00:15:49.088 "superblock": true, 00:15:49.088 
"num_base_bdevs": 2, 00:15:49.088 "num_base_bdevs_discovered": 2, 00:15:49.088 "num_base_bdevs_operational": 2, 00:15:49.088 "base_bdevs_list": [ 00:15:49.088 { 00:15:49.088 "name": "spare", 00:15:49.088 "uuid": "8ee59576-e281-5f6a-bcff-aed9bcce5841", 00:15:49.088 "is_configured": true, 00:15:49.088 "data_offset": 256, 00:15:49.088 "data_size": 7936 00:15:49.088 }, 00:15:49.088 { 00:15:49.088 "name": "BaseBdev2", 00:15:49.088 "uuid": "444ed6a0-3c52-546a-8be2-69995b789b01", 00:15:49.088 "is_configured": true, 00:15:49.088 "data_offset": 256, 00:15:49.088 "data_size": 7936 00:15:49.088 } 00:15:49.088 ] 00:15:49.088 }' 00:15:49.088 05:04:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:49.348 05:04:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:49.348 05:04:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:49.348 05:05:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:49.348 05:05:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@709 -- # break 00:15:49.348 05:05:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:49.348 05:05:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:49.348 05:05:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:49.348 05:05:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:49.348 05:05:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:49.348 05:05:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:49.348 05:05:00 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:49.348 05:05:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.348 05:05:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:49.348 05:05:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.348 05:05:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:49.348 "name": "raid_bdev1", 00:15:49.348 "uuid": "7e5f4903-2745-481f-9031-26340d811bda", 00:15:49.348 "strip_size_kb": 0, 00:15:49.348 "state": "online", 00:15:49.348 "raid_level": "raid1", 00:15:49.348 "superblock": true, 00:15:49.348 "num_base_bdevs": 2, 00:15:49.348 "num_base_bdevs_discovered": 2, 00:15:49.348 "num_base_bdevs_operational": 2, 00:15:49.348 "base_bdevs_list": [ 00:15:49.348 { 00:15:49.348 "name": "spare", 00:15:49.348 "uuid": "8ee59576-e281-5f6a-bcff-aed9bcce5841", 00:15:49.348 "is_configured": true, 00:15:49.348 "data_offset": 256, 00:15:49.348 "data_size": 7936 00:15:49.348 }, 00:15:49.348 { 00:15:49.348 "name": "BaseBdev2", 00:15:49.348 "uuid": "444ed6a0-3c52-546a-8be2-69995b789b01", 00:15:49.348 "is_configured": true, 00:15:49.348 "data_offset": 256, 00:15:49.348 "data_size": 7936 00:15:49.348 } 00:15:49.348 ] 00:15:49.348 }' 00:15:49.348 05:05:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:49.348 05:05:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:49.348 05:05:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:49.348 05:05:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:49.348 05:05:00 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:49.348 05:05:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:49.348 05:05:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:49.348 05:05:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:49.348 05:05:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:49.348 05:05:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:49.348 05:05:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:49.348 05:05:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:49.348 05:05:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:49.348 05:05:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:49.348 05:05:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:49.348 05:05:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:49.348 05:05:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.348 05:05:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:49.348 05:05:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.608 05:05:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:49.608 "name": "raid_bdev1", 00:15:49.608 "uuid": "7e5f4903-2745-481f-9031-26340d811bda", 00:15:49.608 
"strip_size_kb": 0, 00:15:49.608 "state": "online", 00:15:49.608 "raid_level": "raid1", 00:15:49.608 "superblock": true, 00:15:49.608 "num_base_bdevs": 2, 00:15:49.608 "num_base_bdevs_discovered": 2, 00:15:49.608 "num_base_bdevs_operational": 2, 00:15:49.608 "base_bdevs_list": [ 00:15:49.608 { 00:15:49.608 "name": "spare", 00:15:49.608 "uuid": "8ee59576-e281-5f6a-bcff-aed9bcce5841", 00:15:49.608 "is_configured": true, 00:15:49.608 "data_offset": 256, 00:15:49.608 "data_size": 7936 00:15:49.608 }, 00:15:49.608 { 00:15:49.608 "name": "BaseBdev2", 00:15:49.608 "uuid": "444ed6a0-3c52-546a-8be2-69995b789b01", 00:15:49.608 "is_configured": true, 00:15:49.608 "data_offset": 256, 00:15:49.608 "data_size": 7936 00:15:49.608 } 00:15:49.608 ] 00:15:49.608 }' 00:15:49.608 05:05:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:49.608 05:05:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:49.868 05:05:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:49.868 05:05:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.868 05:05:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:49.868 [2024-12-14 05:05:00.544415] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:49.868 [2024-12-14 05:05:00.544509] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:49.868 [2024-12-14 05:05:00.544614] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:49.868 [2024-12-14 05:05:00.544715] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:49.868 [2024-12-14 05:05:00.544739] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, 
state offline 00:15:49.868 05:05:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.868 05:05:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # jq length 00:15:49.868 05:05:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:49.868 05:05:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.868 05:05:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:49.868 05:05:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.868 05:05:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:49.868 05:05:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:49.868 05:05:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:15:49.868 05:05:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:15:49.868 05:05:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:49.868 05:05:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:15:49.868 05:05:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:49.868 05:05:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:49.868 05:05:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:49.868 05:05:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:15:49.868 05:05:00 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:49.868 05:05:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:49.868 05:05:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:15:50.128 /dev/nbd0 00:15:50.128 05:05:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:50.128 05:05:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:50.128 05:05:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:15:50.128 05:05:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # local i 00:15:50.128 05:05:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:50.128 05:05:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:50.128 05:05:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:15:50.128 05:05:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # break 00:15:50.128 05:05:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:50.128 05:05:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:50.128 05:05:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:50.128 1+0 records in 00:15:50.128 1+0 records out 00:15:50.128 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000648835 s, 6.3 MB/s 00:15:50.128 05:05:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:50.128 05:05:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # size=4096 00:15:50.128 05:05:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:50.128 05:05:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:50.128 05:05:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # return 0 00:15:50.128 05:05:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:50.128 05:05:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:50.128 05:05:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:15:50.390 /dev/nbd1 00:15:50.390 05:05:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:50.390 05:05:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:50.390 05:05:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:15:50.390 05:05:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # local i 00:15:50.390 05:05:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:50.390 05:05:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:50.390 05:05:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:15:50.390 05:05:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # break 00:15:50.390 05:05:01 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:50.390 05:05:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:50.390 05:05:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:50.390 1+0 records in 00:15:50.390 1+0 records out 00:15:50.390 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000539525 s, 7.6 MB/s 00:15:50.390 05:05:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:50.390 05:05:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # size=4096 00:15:50.390 05:05:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:50.390 05:05:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:50.390 05:05:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # return 0 00:15:50.390 05:05:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:50.390 05:05:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:50.390 05:05:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:15:50.390 05:05:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:15:50.390 05:05:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:50.390 05:05:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:50.390 05:05:01 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/nbd_common.sh@50 -- # local nbd_list 00:15:50.390 05:05:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:15:50.390 05:05:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:50.390 05:05:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:50.650 05:05:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:50.650 05:05:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:50.650 05:05:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:50.650 05:05:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:50.650 05:05:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:50.650 05:05:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:50.650 05:05:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:15:50.650 05:05:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:15:50.650 05:05:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:50.650 05:05:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:50.918 05:05:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:50.918 05:05:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:50.918 05:05:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 
00:15:50.918 05:05:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:50.918 05:05:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:50.918 05:05:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:50.918 05:05:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:15:50.918 05:05:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:15:50.918 05:05:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:15:50.918 05:05:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:15:50.918 05:05:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.918 05:05:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:50.918 05:05:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.918 05:05:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:50.918 05:05:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.918 05:05:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:50.918 [2024-12-14 05:05:01.623495] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:50.918 [2024-12-14 05:05:01.623568] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:50.918 [2024-12-14 05:05:01.623592] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:15:50.918 [2024-12-14 05:05:01.623608] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
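The `waitfornbd` helper traced above (`common/autotest_common.sh@868-889`) and its `waitfornbd_exit` counterpart (`bdev/nbd_common.sh@35-45`) both poll `/proc/partitions` in a bounded retry loop after starting or stopping an nbd disk. A minimal sketch of that polling pattern follows; the partitions-file argument is an illustrative addition so the sketch can run without a live nbd device, and the real helper additionally confirms the device serves I/O with a 4 KiB O_DIRECT `dd` read, omitted here:

```shell
#!/usr/bin/env bash
# Sketch of the waitfornbd polling loop from the trace above. Assumption:
# the optional second argument (partitions file) is added for illustration;
# the real helper hard-codes /proc/partitions and follows a successful match
# with `dd if=/dev/$nbd_name ... iflag=direct` plus a nonzero-size check.
waitfornbd() {
    local nbd_name=$1
    local partitions=${2:-/proc/partitions}
    local i
    for ((i = 1; i <= 20; i++)); do
        # Match the device name as a whole word, as `grep -q -w` does above.
        grep -q -w "$nbd_name" "$partitions" && return 0
        sleep 0.1
    done
    return 1  # device never appeared within ~2 seconds of polling
}
```

In the trace, only after this loop breaks does the helper read one block from the device and `return 0`, which is why each `nbd_start_disk` call is followed by a `1+0 records in / 1+0 records out` pair.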
00:15:50.918 [2024-12-14 05:05:01.625909] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:50.918 [2024-12-14 05:05:01.625962] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:50.918 [2024-12-14 05:05:01.626051] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:50.918 [2024-12-14 05:05:01.626098] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:50.918 spare 00:15:50.918 [2024-12-14 05:05:01.626241] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:50.918 05:05:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.918 05:05:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:15:50.918 05:05:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.918 05:05:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:50.918 [2024-12-14 05:05:01.726182] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006600 00:15:50.918 [2024-12-14 05:05:01.726213] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:15:50.918 [2024-12-14 05:05:01.726375] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c19b0 00:15:50.918 [2024-12-14 05:05:01.726569] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006600 00:15:50.918 [2024-12-14 05:05:01.726583] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006600 00:15:50.918 [2024-12-14 05:05:01.726710] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:50.918 05:05:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:15:50.918 05:05:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:50.918 05:05:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:50.918 05:05:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:50.918 05:05:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:50.918 05:05:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:50.918 05:05:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:50.918 05:05:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:50.918 05:05:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:50.918 05:05:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:50.918 05:05:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:50.918 05:05:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:50.919 05:05:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:50.919 05:05:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.919 05:05:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:50.919 05:05:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.919 05:05:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:50.919 "name": "raid_bdev1", 00:15:50.919 "uuid": 
"7e5f4903-2745-481f-9031-26340d811bda", 00:15:50.919 "strip_size_kb": 0, 00:15:50.919 "state": "online", 00:15:50.919 "raid_level": "raid1", 00:15:50.919 "superblock": true, 00:15:50.919 "num_base_bdevs": 2, 00:15:50.919 "num_base_bdevs_discovered": 2, 00:15:50.919 "num_base_bdevs_operational": 2, 00:15:50.919 "base_bdevs_list": [ 00:15:50.919 { 00:15:50.919 "name": "spare", 00:15:50.919 "uuid": "8ee59576-e281-5f6a-bcff-aed9bcce5841", 00:15:50.919 "is_configured": true, 00:15:50.919 "data_offset": 256, 00:15:50.919 "data_size": 7936 00:15:50.919 }, 00:15:50.919 { 00:15:50.919 "name": "BaseBdev2", 00:15:50.919 "uuid": "444ed6a0-3c52-546a-8be2-69995b789b01", 00:15:50.919 "is_configured": true, 00:15:50.919 "data_offset": 256, 00:15:50.919 "data_size": 7936 00:15:50.919 } 00:15:50.919 ] 00:15:50.919 }' 00:15:50.919 05:05:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:50.919 05:05:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:51.489 05:05:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:51.489 05:05:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:51.489 05:05:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:51.489 05:05:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:51.489 05:05:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:51.489 05:05:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:51.489 05:05:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:51.489 05:05:02 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.489 05:05:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:51.489 05:05:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.489 05:05:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:51.489 "name": "raid_bdev1", 00:15:51.489 "uuid": "7e5f4903-2745-481f-9031-26340d811bda", 00:15:51.489 "strip_size_kb": 0, 00:15:51.489 "state": "online", 00:15:51.489 "raid_level": "raid1", 00:15:51.489 "superblock": true, 00:15:51.489 "num_base_bdevs": 2, 00:15:51.489 "num_base_bdevs_discovered": 2, 00:15:51.489 "num_base_bdevs_operational": 2, 00:15:51.489 "base_bdevs_list": [ 00:15:51.489 { 00:15:51.489 "name": "spare", 00:15:51.489 "uuid": "8ee59576-e281-5f6a-bcff-aed9bcce5841", 00:15:51.489 "is_configured": true, 00:15:51.489 "data_offset": 256, 00:15:51.489 "data_size": 7936 00:15:51.489 }, 00:15:51.489 { 00:15:51.489 "name": "BaseBdev2", 00:15:51.489 "uuid": "444ed6a0-3c52-546a-8be2-69995b789b01", 00:15:51.489 "is_configured": true, 00:15:51.489 "data_offset": 256, 00:15:51.489 "data_size": 7936 00:15:51.489 } 00:15:51.489 ] 00:15:51.489 }' 00:15:51.489 05:05:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:51.489 05:05:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:51.489 05:05:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:51.489 05:05:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:51.489 05:05:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:51.489 05:05:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 
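`verify_raid_bdev_process` (`bdev/bdev_raid.sh@169-177`) extracts `.process.type` and `.process.target` from the `bdev_raid_get_bdevs` JSON with `jq`, using the `//` alternative operator to default both to `"none"` when no background process is running. A self-contained sketch of those checks, with a canned sample (field values taken from the dumps above) standing in for the live `rpc.py` call, which needs a running SPDK target:

```shell
#!/usr/bin/env bash
# Sketch of the jq checks in verify_raid_bdev_process. Assumption: the
# canned JSON replaces
#   rpc.py -s /var/tmp/spdk.sock bdev_raid_get_bdevs all \
#     | jq -r '.[] | select(.name == "raid_bdev1")'
raid_bdev_info='{
  "name": "raid_bdev1",
  "state": "online",
  "raid_level": "raid1",
  "process": { "type": "rebuild", "target": "spare" }
}'
# `// "none"` supplies the default when the .process object is absent.
process_type=$(jq -r '.process.type // "none"' <<< "$raid_bdev_info")
process_target=$(jq -r '.process.target // "none"' <<< "$raid_bdev_info")
[[ $process_type == rebuild ]] || exit 1
[[ $process_target == spare ]] || exit 1
echo "$process_type:$process_target"
```

With the sample above this prints `rebuild:spare`; on the idle arrays earlier in the trace the same filters yield `none:none`, which is what the `[[ none == \n\o\n\e ]]` comparisons assert.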
00:15:51.489 05:05:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:15:51.489 05:05:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:51.489 05:05:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.489 05:05:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:15:51.489 05:05:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:51.489 05:05:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.489 05:05:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:51.748 [2024-12-14 05:05:02.374263] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:51.748 05:05:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.748 05:05:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:51.748 05:05:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:51.748 05:05:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:51.748 05:05:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:51.748 05:05:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:51.748 05:05:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:51.748 05:05:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:51.748 05:05:02 bdev_raid.raid_rebuild_test_sb_md_separate 
-- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:51.748 05:05:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:51.748 05:05:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:51.748 05:05:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:51.748 05:05:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.748 05:05:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:51.748 05:05:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:51.748 05:05:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.748 05:05:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:51.748 "name": "raid_bdev1", 00:15:51.748 "uuid": "7e5f4903-2745-481f-9031-26340d811bda", 00:15:51.748 "strip_size_kb": 0, 00:15:51.748 "state": "online", 00:15:51.748 "raid_level": "raid1", 00:15:51.748 "superblock": true, 00:15:51.748 "num_base_bdevs": 2, 00:15:51.748 "num_base_bdevs_discovered": 1, 00:15:51.748 "num_base_bdevs_operational": 1, 00:15:51.748 "base_bdevs_list": [ 00:15:51.748 { 00:15:51.748 "name": null, 00:15:51.748 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:51.748 "is_configured": false, 00:15:51.749 "data_offset": 0, 00:15:51.749 "data_size": 7936 00:15:51.749 }, 00:15:51.749 { 00:15:51.749 "name": "BaseBdev2", 00:15:51.749 "uuid": "444ed6a0-3c52-546a-8be2-69995b789b01", 00:15:51.749 "is_configured": true, 00:15:51.749 "data_offset": 256, 00:15:51.749 "data_size": 7936 00:15:51.749 } 00:15:51.749 ] 00:15:51.749 }' 00:15:51.749 05:05:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:51.749 05:05:02 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:52.008 05:05:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:52.008 05:05:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.008 05:05:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:52.008 [2024-12-14 05:05:02.817515] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:52.008 [2024-12-14 05:05:02.817699] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:15:52.008 [2024-12-14 05:05:02.817725] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:15:52.008 [2024-12-14 05:05:02.817776] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:52.008 [2024-12-14 05:05:02.820556] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1a80 00:15:52.008 [2024-12-14 05:05:02.822781] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:52.008 05:05:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.008 05:05:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@757 -- # sleep 1 00:15:53.389 05:05:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:53.389 05:05:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:53.389 05:05:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:53.389 05:05:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local 
target=spare 00:15:53.389 05:05:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:53.389 05:05:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:53.389 05:05:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:53.389 05:05:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.389 05:05:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:53.389 05:05:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.389 05:05:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:53.389 "name": "raid_bdev1", 00:15:53.389 "uuid": "7e5f4903-2745-481f-9031-26340d811bda", 00:15:53.389 "strip_size_kb": 0, 00:15:53.389 "state": "online", 00:15:53.389 "raid_level": "raid1", 00:15:53.389 "superblock": true, 00:15:53.389 "num_base_bdevs": 2, 00:15:53.389 "num_base_bdevs_discovered": 2, 00:15:53.389 "num_base_bdevs_operational": 2, 00:15:53.389 "process": { 00:15:53.389 "type": "rebuild", 00:15:53.389 "target": "spare", 00:15:53.389 "progress": { 00:15:53.389 "blocks": 2560, 00:15:53.389 "percent": 32 00:15:53.389 } 00:15:53.389 }, 00:15:53.389 "base_bdevs_list": [ 00:15:53.389 { 00:15:53.389 "name": "spare", 00:15:53.389 "uuid": "8ee59576-e281-5f6a-bcff-aed9bcce5841", 00:15:53.389 "is_configured": true, 00:15:53.389 "data_offset": 256, 00:15:53.389 "data_size": 7936 00:15:53.389 }, 00:15:53.389 { 00:15:53.389 "name": "BaseBdev2", 00:15:53.389 "uuid": "444ed6a0-3c52-546a-8be2-69995b789b01", 00:15:53.389 "is_configured": true, 00:15:53.389 "data_offset": 256, 00:15:53.389 "data_size": 7936 00:15:53.389 } 00:15:53.389 ] 00:15:53.389 }' 00:15:53.389 05:05:03 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:53.389 05:05:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:53.389 05:05:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:53.389 05:05:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:53.389 05:05:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:15:53.389 05:05:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.389 05:05:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:53.389 [2024-12-14 05:05:03.990035] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:53.389 [2024-12-14 05:05:04.030646] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:53.389 [2024-12-14 05:05:04.030715] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:53.389 [2024-12-14 05:05:04.030737] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:53.389 [2024-12-14 05:05:04.030746] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:53.389 05:05:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.389 05:05:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:53.389 05:05:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:53.389 05:05:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:53.389 05:05:04 bdev_raid.raid_rebuild_test_sb_md_separate 
-- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:53.389 05:05:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:53.389 05:05:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:53.389 05:05:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:53.389 05:05:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:53.389 05:05:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:53.389 05:05:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:53.389 05:05:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:53.389 05:05:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:53.389 05:05:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.389 05:05:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:53.389 05:05:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.389 05:05:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:53.389 "name": "raid_bdev1", 00:15:53.389 "uuid": "7e5f4903-2745-481f-9031-26340d811bda", 00:15:53.389 "strip_size_kb": 0, 00:15:53.389 "state": "online", 00:15:53.389 "raid_level": "raid1", 00:15:53.389 "superblock": true, 00:15:53.389 "num_base_bdevs": 2, 00:15:53.389 "num_base_bdevs_discovered": 1, 00:15:53.389 "num_base_bdevs_operational": 1, 00:15:53.389 "base_bdevs_list": [ 00:15:53.389 { 00:15:53.389 "name": null, 00:15:53.389 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:53.389 
"is_configured": false, 00:15:53.389 "data_offset": 0, 00:15:53.389 "data_size": 7936 00:15:53.389 }, 00:15:53.389 { 00:15:53.389 "name": "BaseBdev2", 00:15:53.389 "uuid": "444ed6a0-3c52-546a-8be2-69995b789b01", 00:15:53.389 "is_configured": true, 00:15:53.389 "data_offset": 256, 00:15:53.389 "data_size": 7936 00:15:53.389 } 00:15:53.389 ] 00:15:53.389 }' 00:15:53.389 05:05:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:53.389 05:05:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:53.649 05:05:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:53.649 05:05:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.649 05:05:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:53.649 [2024-12-14 05:05:04.475357] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:53.649 [2024-12-14 05:05:04.475496] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:53.649 [2024-12-14 05:05:04.475544] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:15:53.649 [2024-12-14 05:05:04.475580] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:53.649 [2024-12-14 05:05:04.475886] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:53.649 [2024-12-14 05:05:04.475949] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:53.649 [2024-12-14 05:05:04.476049] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:53.649 [2024-12-14 05:05:04.476094] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 
00:15:53.649 [2024-12-14 05:05:04.476155] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:15:53.649 [2024-12-14 05:05:04.476246] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:53.649 [2024-12-14 05:05:04.478566] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:15:53.649 [2024-12-14 05:05:04.480740] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:53.649 spare 00:15:53.649 05:05:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.649 05:05:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@764 -- # sleep 1 00:15:55.029 05:05:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:55.029 05:05:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:55.029 05:05:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:55.029 05:05:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:55.029 05:05:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:55.029 05:05:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:55.029 05:05:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:55.029 05:05:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.029 05:05:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:55.029 05:05:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:15:55.029 05:05:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:55.029 "name": "raid_bdev1", 00:15:55.029 "uuid": "7e5f4903-2745-481f-9031-26340d811bda", 00:15:55.029 "strip_size_kb": 0, 00:15:55.029 "state": "online", 00:15:55.029 "raid_level": "raid1", 00:15:55.029 "superblock": true, 00:15:55.029 "num_base_bdevs": 2, 00:15:55.029 "num_base_bdevs_discovered": 2, 00:15:55.029 "num_base_bdevs_operational": 2, 00:15:55.029 "process": { 00:15:55.029 "type": "rebuild", 00:15:55.029 "target": "spare", 00:15:55.029 "progress": { 00:15:55.029 "blocks": 2560, 00:15:55.029 "percent": 32 00:15:55.029 } 00:15:55.029 }, 00:15:55.029 "base_bdevs_list": [ 00:15:55.029 { 00:15:55.029 "name": "spare", 00:15:55.029 "uuid": "8ee59576-e281-5f6a-bcff-aed9bcce5841", 00:15:55.029 "is_configured": true, 00:15:55.029 "data_offset": 256, 00:15:55.029 "data_size": 7936 00:15:55.029 }, 00:15:55.029 { 00:15:55.029 "name": "BaseBdev2", 00:15:55.029 "uuid": "444ed6a0-3c52-546a-8be2-69995b789b01", 00:15:55.029 "is_configured": true, 00:15:55.029 "data_offset": 256, 00:15:55.029 "data_size": 7936 00:15:55.029 } 00:15:55.029 ] 00:15:55.029 }' 00:15:55.029 05:05:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:55.029 05:05:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:55.029 05:05:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:55.030 05:05:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:55.030 05:05:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:15:55.030 05:05:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.030 05:05:05 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:55.030 [2024-12-14 05:05:05.644833] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:55.030 [2024-12-14 05:05:05.688650] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:55.030 [2024-12-14 05:05:05.688730] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:55.030 [2024-12-14 05:05:05.688747] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:55.030 [2024-12-14 05:05:05.688759] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:55.030 05:05:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.030 05:05:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:55.030 05:05:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:55.030 05:05:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:55.030 05:05:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:55.030 05:05:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:55.030 05:05:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:55.030 05:05:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:55.030 05:05:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:55.030 05:05:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:55.030 05:05:05 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:55.030 05:05:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:55.030 05:05:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.030 05:05:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:55.030 05:05:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:55.030 05:05:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.030 05:05:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:55.030 "name": "raid_bdev1", 00:15:55.030 "uuid": "7e5f4903-2745-481f-9031-26340d811bda", 00:15:55.030 "strip_size_kb": 0, 00:15:55.030 "state": "online", 00:15:55.030 "raid_level": "raid1", 00:15:55.030 "superblock": true, 00:15:55.030 "num_base_bdevs": 2, 00:15:55.030 "num_base_bdevs_discovered": 1, 00:15:55.030 "num_base_bdevs_operational": 1, 00:15:55.030 "base_bdevs_list": [ 00:15:55.030 { 00:15:55.030 "name": null, 00:15:55.030 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:55.030 "is_configured": false, 00:15:55.030 "data_offset": 0, 00:15:55.030 "data_size": 7936 00:15:55.030 }, 00:15:55.030 { 00:15:55.030 "name": "BaseBdev2", 00:15:55.030 "uuid": "444ed6a0-3c52-546a-8be2-69995b789b01", 00:15:55.030 "is_configured": true, 00:15:55.030 "data_offset": 256, 00:15:55.030 "data_size": 7936 00:15:55.030 } 00:15:55.030 ] 00:15:55.030 }' 00:15:55.030 05:05:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:55.030 05:05:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:55.289 05:05:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@770 -- # 
verify_raid_bdev_process raid_bdev1 none none 00:15:55.289 05:05:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:55.289 05:05:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:55.289 05:05:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:55.289 05:05:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:55.289 05:05:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:55.289 05:05:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:55.289 05:05:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.289 05:05:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:55.289 05:05:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.289 05:05:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:55.289 "name": "raid_bdev1", 00:15:55.289 "uuid": "7e5f4903-2745-481f-9031-26340d811bda", 00:15:55.289 "strip_size_kb": 0, 00:15:55.289 "state": "online", 00:15:55.289 "raid_level": "raid1", 00:15:55.289 "superblock": true, 00:15:55.289 "num_base_bdevs": 2, 00:15:55.289 "num_base_bdevs_discovered": 1, 00:15:55.289 "num_base_bdevs_operational": 1, 00:15:55.289 "base_bdevs_list": [ 00:15:55.289 { 00:15:55.289 "name": null, 00:15:55.289 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:55.289 "is_configured": false, 00:15:55.289 "data_offset": 0, 00:15:55.289 "data_size": 7936 00:15:55.289 }, 00:15:55.289 { 00:15:55.289 "name": "BaseBdev2", 00:15:55.289 "uuid": "444ed6a0-3c52-546a-8be2-69995b789b01", 00:15:55.289 "is_configured": true, 
00:15:55.289 "data_offset": 256, 00:15:55.289 "data_size": 7936 00:15:55.289 } 00:15:55.289 ] 00:15:55.289 }' 00:15:55.289 05:05:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:55.549 05:05:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:55.549 05:05:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:55.549 05:05:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:55.549 05:05:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:15:55.549 05:05:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.549 05:05:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:55.549 05:05:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.549 05:05:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:55.549 05:05:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.549 05:05:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:55.549 [2024-12-14 05:05:06.252563] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:55.549 [2024-12-14 05:05:06.252635] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:55.549 [2024-12-14 05:05:06.252658] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:15:55.549 [2024-12-14 05:05:06.252673] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:55.549 [2024-12-14 05:05:06.252913] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:55.549 [2024-12-14 05:05:06.252932] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:55.549 [2024-12-14 05:05:06.252985] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:15:55.549 [2024-12-14 05:05:06.253042] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:15:55.549 [2024-12-14 05:05:06.253060] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:55.549 [2024-12-14 05:05:06.253076] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:15:55.549 BaseBdev1 00:15:55.549 05:05:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.549 05:05:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@775 -- # sleep 1 00:15:56.488 05:05:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:56.488 05:05:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:56.488 05:05:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:56.488 05:05:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:56.488 05:05:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:56.488 05:05:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:56.488 05:05:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:56.488 05:05:07 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:56.488 05:05:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:56.488 05:05:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:56.488 05:05:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:56.488 05:05:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:56.488 05:05:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.488 05:05:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:56.488 05:05:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.488 05:05:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:56.488 "name": "raid_bdev1", 00:15:56.488 "uuid": "7e5f4903-2745-481f-9031-26340d811bda", 00:15:56.488 "strip_size_kb": 0, 00:15:56.488 "state": "online", 00:15:56.488 "raid_level": "raid1", 00:15:56.488 "superblock": true, 00:15:56.488 "num_base_bdevs": 2, 00:15:56.488 "num_base_bdevs_discovered": 1, 00:15:56.488 "num_base_bdevs_operational": 1, 00:15:56.488 "base_bdevs_list": [ 00:15:56.488 { 00:15:56.488 "name": null, 00:15:56.488 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:56.488 "is_configured": false, 00:15:56.488 "data_offset": 0, 00:15:56.488 "data_size": 7936 00:15:56.488 }, 00:15:56.488 { 00:15:56.488 "name": "BaseBdev2", 00:15:56.488 "uuid": "444ed6a0-3c52-546a-8be2-69995b789b01", 00:15:56.488 "is_configured": true, 00:15:56.488 "data_offset": 256, 00:15:56.488 "data_size": 7936 00:15:56.488 } 00:15:56.488 ] 00:15:56.488 }' 00:15:56.488 05:05:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:56.488 05:05:07 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:57.058 05:05:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:57.058 05:05:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:57.058 05:05:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:57.058 05:05:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:57.058 05:05:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:57.058 05:05:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:57.058 05:05:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:57.058 05:05:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.058 05:05:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:57.058 05:05:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.058 05:05:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:57.058 "name": "raid_bdev1", 00:15:57.058 "uuid": "7e5f4903-2745-481f-9031-26340d811bda", 00:15:57.058 "strip_size_kb": 0, 00:15:57.058 "state": "online", 00:15:57.058 "raid_level": "raid1", 00:15:57.058 "superblock": true, 00:15:57.058 "num_base_bdevs": 2, 00:15:57.058 "num_base_bdevs_discovered": 1, 00:15:57.058 "num_base_bdevs_operational": 1, 00:15:57.058 "base_bdevs_list": [ 00:15:57.058 { 00:15:57.058 "name": null, 00:15:57.058 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:57.058 "is_configured": false, 00:15:57.058 "data_offset": 0, 00:15:57.058 
"data_size": 7936 00:15:57.058 }, 00:15:57.058 { 00:15:57.058 "name": "BaseBdev2", 00:15:57.058 "uuid": "444ed6a0-3c52-546a-8be2-69995b789b01", 00:15:57.058 "is_configured": true, 00:15:57.058 "data_offset": 256, 00:15:57.058 "data_size": 7936 00:15:57.058 } 00:15:57.058 ] 00:15:57.058 }' 00:15:57.058 05:05:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:57.058 05:05:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:57.058 05:05:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:57.058 05:05:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:57.058 05:05:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:57.058 05:05:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@650 -- # local es=0 00:15:57.058 05:05:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:57.058 05:05:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:15:57.058 05:05:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:57.058 05:05:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:15:57.058 05:05:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:57.058 05:05:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:57.058 05:05:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 
00:15:57.058 05:05:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:57.058 [2024-12-14 05:05:07.878085] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:57.058 [2024-12-14 05:05:07.878357] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:15:57.058 [2024-12-14 05:05:07.878423] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:57.058 request: 00:15:57.058 { 00:15:57.058 "base_bdev": "BaseBdev1", 00:15:57.058 "raid_bdev": "raid_bdev1", 00:15:57.058 "method": "bdev_raid_add_base_bdev", 00:15:57.058 "req_id": 1 00:15:57.058 } 00:15:57.058 Got JSON-RPC error response 00:15:57.058 response: 00:15:57.058 { 00:15:57.058 "code": -22, 00:15:57.058 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:15:57.058 } 00:15:57.058 05:05:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:15:57.058 05:05:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@653 -- # es=1 00:15:57.058 05:05:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:57.058 05:05:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:57.058 05:05:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:57.058 05:05:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@779 -- # sleep 1 00:15:58.438 05:05:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:58.438 05:05:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:58.438 05:05:08 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:58.439 05:05:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:58.439 05:05:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:58.439 05:05:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:58.439 05:05:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:58.439 05:05:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:58.439 05:05:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:58.439 05:05:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:58.439 05:05:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:58.439 05:05:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:58.439 05:05:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.439 05:05:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:58.439 05:05:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.439 05:05:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:58.439 "name": "raid_bdev1", 00:15:58.439 "uuid": "7e5f4903-2745-481f-9031-26340d811bda", 00:15:58.439 "strip_size_kb": 0, 00:15:58.439 "state": "online", 00:15:58.439 "raid_level": "raid1", 00:15:58.439 "superblock": true, 00:15:58.439 "num_base_bdevs": 2, 00:15:58.439 "num_base_bdevs_discovered": 1, 00:15:58.439 "num_base_bdevs_operational": 1, 00:15:58.439 "base_bdevs_list": [ 
00:15:58.439 { 00:15:58.439 "name": null, 00:15:58.439 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:58.439 "is_configured": false, 00:15:58.439 "data_offset": 0, 00:15:58.439 "data_size": 7936 00:15:58.439 }, 00:15:58.439 { 00:15:58.439 "name": "BaseBdev2", 00:15:58.439 "uuid": "444ed6a0-3c52-546a-8be2-69995b789b01", 00:15:58.439 "is_configured": true, 00:15:58.439 "data_offset": 256, 00:15:58.439 "data_size": 7936 00:15:58.439 } 00:15:58.439 ] 00:15:58.439 }' 00:15:58.439 05:05:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:58.439 05:05:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:58.439 05:05:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:58.439 05:05:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:58.439 05:05:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:58.439 05:05:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:58.439 05:05:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:58.698 05:05:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:58.698 05:05:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:58.698 05:05:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.698 05:05:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:58.698 05:05:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.698 05:05:09 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:58.698 "name": "raid_bdev1", 00:15:58.698 "uuid": "7e5f4903-2745-481f-9031-26340d811bda", 00:15:58.698 "strip_size_kb": 0, 00:15:58.698 "state": "online", 00:15:58.698 "raid_level": "raid1", 00:15:58.698 "superblock": true, 00:15:58.698 "num_base_bdevs": 2, 00:15:58.698 "num_base_bdevs_discovered": 1, 00:15:58.698 "num_base_bdevs_operational": 1, 00:15:58.698 "base_bdevs_list": [ 00:15:58.698 { 00:15:58.698 "name": null, 00:15:58.698 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:58.698 "is_configured": false, 00:15:58.698 "data_offset": 0, 00:15:58.698 "data_size": 7936 00:15:58.698 }, 00:15:58.698 { 00:15:58.698 "name": "BaseBdev2", 00:15:58.698 "uuid": "444ed6a0-3c52-546a-8be2-69995b789b01", 00:15:58.698 "is_configured": true, 00:15:58.698 "data_offset": 256, 00:15:58.698 "data_size": 7936 00:15:58.698 } 00:15:58.698 ] 00:15:58.698 }' 00:15:58.698 05:05:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:58.698 05:05:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:58.698 05:05:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:58.698 05:05:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:58.698 05:05:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@784 -- # killprocess 98116 00:15:58.698 05:05:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@950 -- # '[' -z 98116 ']' 00:15:58.698 05:05:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@954 -- # kill -0 98116 00:15:58.698 05:05:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@955 -- # uname 00:15:58.698 05:05:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:58.698 
05:05:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 98116 00:15:58.698 05:05:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:58.698 05:05:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:58.698 05:05:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@968 -- # echo 'killing process with pid 98116' 00:15:58.698 killing process with pid 98116 00:15:58.698 Received shutdown signal, test time was about 60.000000 seconds 00:15:58.698 00:15:58.698 Latency(us) 00:15:58.698 [2024-12-14T05:05:09.581Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:58.698 [2024-12-14T05:05:09.581Z] =================================================================================================================== 00:15:58.698 [2024-12-14T05:05:09.581Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:58.698 05:05:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@969 -- # kill 98116 00:15:58.698 [2024-12-14 05:05:09.485803] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:58.698 05:05:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@974 -- # wait 98116 00:15:58.698 [2024-12-14 05:05:09.485961] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:58.698 [2024-12-14 05:05:09.486022] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:58.698 [2024-12-14 05:05:09.486033] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state offline 00:15:58.698 [2024-12-14 05:05:09.547825] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:59.268 05:05:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@786 -- # 
return 0 00:15:59.268 00:15:59.268 real 0m18.355s 00:15:59.268 user 0m24.203s 00:15:59.268 sys 0m2.642s 00:15:59.268 ************************************ 00:15:59.268 END TEST raid_rebuild_test_sb_md_separate 00:15:59.268 ************************************ 00:15:59.268 05:05:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:59.268 05:05:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:59.268 05:05:09 bdev_raid -- bdev/bdev_raid.sh@1010 -- # base_malloc_params='-m 32 -i' 00:15:59.268 05:05:09 bdev_raid -- bdev/bdev_raid.sh@1011 -- # run_test raid_state_function_test_sb_md_interleaved raid_state_function_test raid1 2 true 00:15:59.268 05:05:09 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:15:59.268 05:05:09 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:59.268 05:05:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:59.268 ************************************ 00:15:59.268 START TEST raid_state_function_test_sb_md_interleaved 00:15:59.268 ************************************ 00:15:59.268 05:05:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 true 00:15:59.268 05:05:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:15:59.268 05:05:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:15:59.268 05:05:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:15:59.268 05:05:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:15:59.268 05:05:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:15:59.268 05:05:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:59.268 05:05:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:15:59.268 05:05:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:59.268 05:05:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:59.268 05:05:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:15:59.268 05:05:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:59.268 05:05:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:59.268 05:05:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:15:59.268 05:05:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:15:59.268 05:05:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:15:59.268 05:05:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # local strip_size 00:15:59.268 05:05:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:15:59.268 05:05:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:15:59.268 05:05:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:15:59.268 05:05:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:15:59.268 05:05:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:15:59.268 05:05:10 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:15:59.268 Process raid pid: 98791 00:15:59.268 05:05:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@229 -- # raid_pid=98791 00:15:59.268 05:05:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:15:59.268 05:05:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 98791' 00:15:59.268 05:05:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@231 -- # waitforlisten 98791 00:15:59.268 05:05:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@831 -- # '[' -z 98791 ']' 00:15:59.268 05:05:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:59.268 05:05:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:59.268 05:05:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:59.268 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:59.268 05:05:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:59.268 05:05:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:59.268 [2024-12-14 05:05:10.101459] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:15:59.268 [2024-12-14 05:05:10.101738] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:15:59.528 [2024-12-14 05:05:10.269634] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:15:59.528 [2024-12-14 05:05:10.343276] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:15:59.787 [2024-12-14 05:05:10.421391] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:15:59.787 [2024-12-14 05:05:10.421517] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:16:00.046 05:05:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:16:00.046 05:05:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # return 0
00:16:00.046 05:05:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid
00:16:00.046 05:05:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:00.046 05:05:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:16:00.046 [2024-12-14 05:05:10.918107] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:16:00.046 [2024-12-14 05:05:10.918214] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:16:00.046 [2024-12-14 05:05:10.918231] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:16:00.046 [2024-12-14 05:05:10.918244] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:16:00.046 05:05:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:00.046 05:05:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2
00:16:00.047 05:05:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:16:00.047 05:05:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:16:00.047 05:05:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:16:00.047 05:05:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:16:00.047 05:05:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:16:00.047 05:05:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:16:00.047 05:05:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:16:00.047 05:05:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:16:00.047 05:05:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp
00:16:00.306 05:05:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:00.306 05:05:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:16:00.306 05:05:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:00.306 05:05:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:16:00.306 05:05:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:00.306 05:05:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:16:00.306 "name": "Existed_Raid",
00:16:00.307 "uuid": "f0cdac99-bed3-471e-965c-24f07b83a718",
00:16:00.307 "strip_size_kb": 0,
00:16:00.307 "state": "configuring",
00:16:00.307 "raid_level": "raid1",
00:16:00.307 "superblock": true,
00:16:00.307 "num_base_bdevs": 2,
00:16:00.307 "num_base_bdevs_discovered": 0,
00:16:00.307 "num_base_bdevs_operational": 2,
00:16:00.307 "base_bdevs_list": [
00:16:00.307 {
00:16:00.307 "name": "BaseBdev1",
00:16:00.307 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:00.307 "is_configured": false,
00:16:00.307 "data_offset": 0,
00:16:00.307 "data_size": 0
00:16:00.307 },
00:16:00.307 {
00:16:00.307 "name": "BaseBdev2",
00:16:00.307 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:00.307 "is_configured": false,
00:16:00.307 "data_offset": 0,
00:16:00.307 "data_size": 0
00:16:00.307 }
00:16:00.307 ]
00:16:00.307 }'
00:16:00.307 05:05:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:16:00.307 05:05:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:16:00.567 05:05:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:16:00.567 05:05:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:00.567 05:05:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:16:00.567 [2024-12-14 05:05:11.345270] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:16:00.567 [2024-12-14 05:05:11.345378] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring
00:16:00.567 05:05:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:00.567 05:05:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid
00:16:00.567 05:05:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:00.567 05:05:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:16:00.567 [2024-12-14 05:05:11.357296] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:16:00.567 [2024-12-14 05:05:11.357387] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:16:00.567 [2024-12-14 05:05:11.357434] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:16:00.567 [2024-12-14 05:05:11.357462] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:16:00.567 05:05:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:00.567 05:05:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1
00:16:00.567 05:05:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:00.567 05:05:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:16:00.567 [2024-12-14 05:05:11.384996] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:16:00.567 BaseBdev1
00:16:00.567 05:05:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:00.567 05:05:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1
00:16:00.567 05:05:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1
00:16:00.567 05:05:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:16:00.567 05:05:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@901 -- # local i
00:16:00.567 05:05:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:16:00.567 05:05:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:16:00.567 05:05:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:16:00.567 05:05:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:00.567 05:05:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:16:00.567 05:05:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:00.567 05:05:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:16:00.567 05:05:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:00.567 05:05:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:16:00.567 [
00:16:00.567 {
00:16:00.567 "name": "BaseBdev1",
00:16:00.567 "aliases": [
00:16:00.567 "82dc62d1-1656-4365-8cba-dd7c91cc4a0d"
00:16:00.567 ],
00:16:00.567 "product_name": "Malloc disk",
00:16:00.567 "block_size": 4128,
00:16:00.567 "num_blocks": 8192,
00:16:00.567 "uuid": "82dc62d1-1656-4365-8cba-dd7c91cc4a0d",
00:16:00.567 "md_size": 32,
00:16:00.567 "md_interleave": true,
00:16:00.567 "dif_type": 0,
00:16:00.567 "assigned_rate_limits": {
00:16:00.567 "rw_ios_per_sec": 0,
00:16:00.567 "rw_mbytes_per_sec": 0,
00:16:00.567 "r_mbytes_per_sec": 0,
00:16:00.567 "w_mbytes_per_sec": 0
00:16:00.567 },
00:16:00.567 "claimed": true,
00:16:00.567 "claim_type": "exclusive_write",
00:16:00.567 "zoned": false,
00:16:00.567 "supported_io_types": {
00:16:00.567 "read": true,
00:16:00.567 "write": true,
00:16:00.567 "unmap": true,
00:16:00.567 "flush": true,
00:16:00.567 "reset": true,
00:16:00.567 "nvme_admin": false,
00:16:00.567 "nvme_io": false,
00:16:00.567 "nvme_io_md": false,
00:16:00.567 "write_zeroes": true,
00:16:00.567 "zcopy": true,
00:16:00.567 "get_zone_info": false,
00:16:00.567 "zone_management": false,
00:16:00.567 "zone_append": false,
00:16:00.567 "compare": false,
00:16:00.567 "compare_and_write": false,
00:16:00.568 "abort": true,
00:16:00.568 "seek_hole": false,
00:16:00.568 "seek_data": false,
00:16:00.568 "copy": true,
00:16:00.568 "nvme_iov_md": false
00:16:00.568 },
00:16:00.568 "memory_domains": [
00:16:00.568 {
00:16:00.568 "dma_device_id": "system",
00:16:00.568 "dma_device_type": 1
00:16:00.568 },
00:16:00.568 {
00:16:00.568 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:16:00.568 "dma_device_type": 2
00:16:00.568 }
00:16:00.568 ],
00:16:00.568 "driver_specific": {}
00:16:00.568 }
00:16:00.568 ]
00:16:00.568 05:05:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:00.568 05:05:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@907 -- # return 0
00:16:00.568 05:05:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2
00:16:00.568 05:05:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:16:00.568 05:05:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:16:00.568 05:05:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:16:00.568 05:05:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:16:00.568 05:05:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:16:00.568 05:05:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:16:00.568 05:05:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:16:00.568 05:05:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:16:00.568 05:05:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp
00:16:00.568 05:05:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:00.568 05:05:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:16:00.568 05:05:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:00.568 05:05:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:16:00.827 05:05:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:00.827 05:05:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:16:00.827 "name": "Existed_Raid",
00:16:00.827 "uuid": "3d03a703-6059-4bf6-8235-d4a1306215ad",
00:16:00.827 "strip_size_kb": 0,
00:16:00.827 "state": "configuring",
00:16:00.827 "raid_level": "raid1",
00:16:00.827 "superblock": true,
00:16:00.827 "num_base_bdevs": 2,
00:16:00.827 "num_base_bdevs_discovered": 1,
00:16:00.827 "num_base_bdevs_operational": 2,
00:16:00.827 "base_bdevs_list": [
00:16:00.827 {
00:16:00.827 "name": "BaseBdev1",
00:16:00.827 "uuid": "82dc62d1-1656-4365-8cba-dd7c91cc4a0d",
00:16:00.827 "is_configured": true,
00:16:00.827 "data_offset": 256,
00:16:00.827 "data_size": 7936
00:16:00.827 },
00:16:00.827 {
00:16:00.827 "name": "BaseBdev2",
00:16:00.827 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:00.827 "is_configured": false,
00:16:00.827 "data_offset": 0,
00:16:00.827 "data_size": 0
00:16:00.827 }
00:16:00.827 ]
00:16:00.827 }'
00:16:00.827 05:05:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:16:00.827 05:05:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:16:01.086 05:05:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:16:01.086 05:05:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:01.086 05:05:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:16:01.086 [2024-12-14 05:05:11.840262] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:16:01.086 [2024-12-14 05:05:11.840314] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring
00:16:01.086 05:05:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:01.086 05:05:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid
00:16:01.086 05:05:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:01.086 05:05:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:16:01.086 [2024-12-14 05:05:11.852354] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:16:01.086 [2024-12-14 05:05:11.854470] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:16:01.086 [2024-12-14 05:05:11.854574] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:16:01.086 05:05:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:01.086 05:05:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i = 1 ))
00:16:01.086 05:05:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:16:01.086 05:05:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2
00:16:01.086 05:05:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:16:01.086 05:05:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:16:01.086 05:05:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:16:01.086 05:05:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:16:01.086 05:05:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:16:01.087 05:05:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:16:01.087 05:05:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:16:01.087 05:05:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:16:01.087 05:05:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp
00:16:01.087 05:05:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:01.087 05:05:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:16:01.087 05:05:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:01.087 05:05:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:16:01.087 05:05:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:01.087 05:05:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:16:01.087 "name": "Existed_Raid",
00:16:01.087 "uuid": "61ad51fe-3721-4c5c-9033-74c287ed4b43",
00:16:01.087 "strip_size_kb": 0,
00:16:01.087 "state": "configuring",
00:16:01.087 "raid_level": "raid1",
00:16:01.087 "superblock": true,
00:16:01.087 "num_base_bdevs": 2,
00:16:01.087 "num_base_bdevs_discovered": 1,
00:16:01.087 "num_base_bdevs_operational": 2,
00:16:01.087 "base_bdevs_list": [
00:16:01.087 {
00:16:01.087 "name": "BaseBdev1",
00:16:01.087 "uuid": "82dc62d1-1656-4365-8cba-dd7c91cc4a0d",
00:16:01.087 "is_configured": true,
00:16:01.087 "data_offset": 256,
00:16:01.087 "data_size": 7936
00:16:01.087 },
00:16:01.087 {
00:16:01.087 "name": "BaseBdev2",
00:16:01.087 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:01.087 "is_configured": false,
00:16:01.087 "data_offset": 0,
00:16:01.087 "data_size": 0
00:16:01.087 }
00:16:01.087 ]
00:16:01.087 }'
00:16:01.087 05:05:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:16:01.087 05:05:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:16:01.656 05:05:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2
00:16:01.656 05:05:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:01.656 05:05:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:16:01.656 [2024-12-14 05:05:12.289973] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:16:01.656 [2024-12-14 05:05:12.290826] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980
00:16:01.656 [2024-12-14 05:05:12.291026] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128
00:16:01.656 BaseBdev2
00:16:01.656 [2024-12-14 05:05:12.291481] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0
00:16:01.656 [2024-12-14 05:05:12.291763] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980
00:16:01.656 [2024-12-14 05:05:12.291883] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980
00:16:01.656 05:05:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:01.656 [2024-12-14 05:05:12.292135] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:16:01.656 05:05:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2
00:16:01.656 05:05:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2
00:16:01.656 05:05:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:16:01.656 05:05:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@901 -- # local i
00:16:01.656 05:05:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:16:01.656 05:05:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:16:01.656 05:05:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:16:01.656 05:05:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:01.656 05:05:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:16:01.656 05:05:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:01.656 05:05:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:16:01.656 05:05:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:01.656 05:05:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:16:01.656 [
00:16:01.656 {
00:16:01.656 "name": "BaseBdev2",
00:16:01.656 "aliases": [
00:16:01.656 "55887598-bc0e-4c3e-81ab-74886597703b"
00:16:01.656 ],
00:16:01.656 "product_name": "Malloc disk",
00:16:01.656 "block_size": 4128,
00:16:01.656 "num_blocks": 8192,
00:16:01.656 "uuid": "55887598-bc0e-4c3e-81ab-74886597703b",
00:16:01.656 "md_size": 32,
00:16:01.656 "md_interleave": true,
00:16:01.656 "dif_type": 0,
00:16:01.656 "assigned_rate_limits": {
00:16:01.656 "rw_ios_per_sec": 0,
00:16:01.656 "rw_mbytes_per_sec": 0,
00:16:01.656 "r_mbytes_per_sec": 0,
00:16:01.656 "w_mbytes_per_sec": 0
00:16:01.656 },
00:16:01.656 "claimed": true,
00:16:01.656 "claim_type": "exclusive_write",
00:16:01.656 "zoned": false,
00:16:01.656 "supported_io_types": {
00:16:01.656 "read": true,
00:16:01.656 "write": true,
00:16:01.656 "unmap": true,
00:16:01.656 "flush": true,
00:16:01.656 "reset": true,
00:16:01.656 "nvme_admin": false,
00:16:01.656 "nvme_io": false,
00:16:01.656 "nvme_io_md": false,
00:16:01.656 "write_zeroes": true,
00:16:01.656 "zcopy": true,
00:16:01.656 "get_zone_info": false,
00:16:01.656 "zone_management": false,
00:16:01.656 "zone_append": false,
00:16:01.656 "compare": false,
00:16:01.656 "compare_and_write": false,
00:16:01.656 "abort": true,
00:16:01.656 "seek_hole": false,
00:16:01.656 "seek_data": false,
00:16:01.656 "copy": true,
00:16:01.656 "nvme_iov_md": false
00:16:01.656 },
00:16:01.656 "memory_domains": [
00:16:01.656 {
00:16:01.656 "dma_device_id": "system",
00:16:01.656 "dma_device_type": 1
00:16:01.656 },
00:16:01.656 {
00:16:01.656 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:16:01.656 "dma_device_type": 2
00:16:01.656 }
00:16:01.656 ],
00:16:01.656 "driver_specific": {}
00:16:01.656 }
00:16:01.656 ]
00:16:01.656 05:05:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:01.656 05:05:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@907 -- # return 0
00:16:01.656 05:05:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:16:01.656 05:05:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:16:01.656 05:05:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2
00:16:01.656 05:05:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:16:01.656 05:05:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:16:01.656 05:05:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:16:01.657 05:05:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:16:01.657 05:05:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:16:01.657 05:05:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:16:01.657 05:05:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:16:01.657 05:05:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:16:01.657 05:05:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp
00:16:01.657 05:05:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:01.657 05:05:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:16:01.657 05:05:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:01.657 05:05:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:16:01.657 05:05:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:01.657 05:05:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:16:01.657 "name": "Existed_Raid",
00:16:01.657 "uuid": "61ad51fe-3721-4c5c-9033-74c287ed4b43",
00:16:01.657 "strip_size_kb": 0,
00:16:01.657 "state": "online",
00:16:01.657 "raid_level": "raid1",
00:16:01.657 "superblock": true,
00:16:01.657 "num_base_bdevs": 2,
00:16:01.657 "num_base_bdevs_discovered": 2,
00:16:01.657 "num_base_bdevs_operational": 2,
00:16:01.657 "base_bdevs_list": [
00:16:01.657 {
00:16:01.657 "name": "BaseBdev1",
00:16:01.657 "uuid": "82dc62d1-1656-4365-8cba-dd7c91cc4a0d",
00:16:01.657 "is_configured": true,
00:16:01.657 "data_offset": 256,
00:16:01.657 "data_size": 7936
00:16:01.657 },
00:16:01.657 {
00:16:01.657 "name": "BaseBdev2",
00:16:01.657 "uuid": "55887598-bc0e-4c3e-81ab-74886597703b",
00:16:01.657 "is_configured": true,
00:16:01.657 "data_offset": 256,
00:16:01.657 "data_size": 7936
00:16:01.657 }
00:16:01.657 ]
00:16:01.657 }'
00:16:01.657 05:05:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:16:01.657 05:05:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:16:01.916 05:05:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid
00:16:01.916 05:05:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid
00:16:01.917 05:05:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:16:01.917 05:05:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:16:01.917 05:05:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name
00:16:01.917 05:05:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:16:01.917 05:05:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:16:01.917 05:05:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid
00:16:01.917 05:05:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:01.917 05:05:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:16:01.917 [2024-12-14 05:05:12.797370] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:16:02.177 05:05:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:02.177 05:05:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:16:02.177 "name": "Existed_Raid",
00:16:02.177 "aliases": [
00:16:02.177 "61ad51fe-3721-4c5c-9033-74c287ed4b43"
00:16:02.177 ],
00:16:02.177 "product_name": "Raid Volume",
00:16:02.177 "block_size": 4128,
00:16:02.177 "num_blocks": 7936,
00:16:02.177 "uuid": "61ad51fe-3721-4c5c-9033-74c287ed4b43",
00:16:02.177 "md_size": 32,
00:16:02.177 "md_interleave": true,
00:16:02.177 "dif_type": 0,
00:16:02.177 "assigned_rate_limits": {
00:16:02.177 "rw_ios_per_sec": 0,
00:16:02.177 "rw_mbytes_per_sec": 0,
00:16:02.177 "r_mbytes_per_sec": 0,
00:16:02.177 "w_mbytes_per_sec": 0
00:16:02.177 },
00:16:02.177 "claimed": false,
00:16:02.177 "zoned": false,
00:16:02.177 "supported_io_types": {
00:16:02.177 "read": true,
00:16:02.177 "write": true,
00:16:02.177 "unmap": false,
00:16:02.177 "flush": false,
00:16:02.177 "reset": true,
00:16:02.177 "nvme_admin": false,
00:16:02.177 "nvme_io": false,
00:16:02.177 "nvme_io_md": false,
00:16:02.177 "write_zeroes": true,
00:16:02.177 "zcopy": false,
00:16:02.177 "get_zone_info": false,
00:16:02.177 "zone_management": false,
00:16:02.177 "zone_append": false,
00:16:02.177 "compare": false,
00:16:02.177 "compare_and_write": false,
00:16:02.177 "abort": false,
00:16:02.177 "seek_hole": false,
00:16:02.177 "seek_data": false,
00:16:02.177 "copy": false,
00:16:02.177 "nvme_iov_md": false
00:16:02.177 },
00:16:02.177 "memory_domains": [
00:16:02.177 {
00:16:02.177 "dma_device_id": "system",
00:16:02.177 "dma_device_type": 1
00:16:02.177 },
00:16:02.177 {
00:16:02.177 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:16:02.177 "dma_device_type": 2
00:16:02.177 },
00:16:02.177 {
00:16:02.177 "dma_device_id": "system",
00:16:02.177 "dma_device_type": 1
00:16:02.177 },
00:16:02.177 {
00:16:02.177 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:16:02.177 "dma_device_type": 2
00:16:02.177 }
00:16:02.177 ],
00:16:02.177 "driver_specific": {
00:16:02.177 "raid": {
00:16:02.177 "uuid": "61ad51fe-3721-4c5c-9033-74c287ed4b43",
00:16:02.177 "strip_size_kb": 0,
00:16:02.177 "state": "online",
00:16:02.177 "raid_level": "raid1",
00:16:02.177 "superblock": true,
00:16:02.177 "num_base_bdevs": 2,
00:16:02.177 "num_base_bdevs_discovered": 2,
00:16:02.177 "num_base_bdevs_operational": 2,
00:16:02.177 "base_bdevs_list": [
00:16:02.177 {
00:16:02.177 "name": "BaseBdev1",
00:16:02.177 "uuid": "82dc62d1-1656-4365-8cba-dd7c91cc4a0d",
00:16:02.177 "is_configured": true,
00:16:02.177 "data_offset": 256,
00:16:02.177 "data_size": 7936
00:16:02.177 },
00:16:02.177 {
00:16:02.177 "name": "BaseBdev2",
00:16:02.177 "uuid": "55887598-bc0e-4c3e-81ab-74886597703b",
00:16:02.177 "is_configured": true,
00:16:02.177 "data_offset": 256,
00:16:02.177 "data_size": 7936
00:16:02.177 }
00:16:02.177 ]
00:16:02.177 }
00:16:02.177 }
00:16:02.177 }'
00:16:02.177 05:05:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:16:02.177 05:05:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1
00:16:02.177 BaseBdev2'
00:16:02.177 05:05:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:16:02.177 05:05:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0'
00:16:02.177 05:05:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:16:02.177 05:05:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1
00:16:02.177 05:05:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:02.177 05:05:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:16:02.177 05:05:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:16:02.177 05:05:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:02.177 05:05:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0'
00:16:02.177 05:05:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]]
00:16:02.177 05:05:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:16:02.177 05:05:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2
00:16:02.177 05:05:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:16:02.177 05:05:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:02.177 05:05:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:16:02.177 05:05:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:02.177 05:05:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0'
00:16:02.177 05:05:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]]
00:16:02.177 05:05:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:16:02.177 05:05:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:02.177 05:05:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:16:02.177 [2024-12-14 05:05:12.996790] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:16:02.177 05:05:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:02.177 05:05:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@260 -- # local expected_state
00:16:02.177 05:05:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1
00:16:02.177 05:05:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in
00:16:02.177 05:05:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0
00:16:02.177 05:05:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@264 -- # expected_state=online
00:16:02.177 05:05:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1
00:16:02.177 05:05:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:16:02.177 05:05:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:16:02.177 05:05:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:16:02.177 05:05:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:16:02.177 05:05:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:16:02.177 05:05:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:16:02.177 05:05:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:16:02.177 05:05:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:16:02.177 05:05:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp
00:16:02.177 05:05:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:02.177 05:05:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:16:02.177 05:05:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:02.177 05:05:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:16:02.177 05:05:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:02.437 05:05:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:16:02.437 "name": "Existed_Raid",
00:16:02.437 "uuid": "61ad51fe-3721-4c5c-9033-74c287ed4b43",
00:16:02.437 "strip_size_kb": 0,
00:16:02.438 "state": "online",
00:16:02.438 "raid_level": "raid1",
00:16:02.438 "superblock": true,
00:16:02.438 "num_base_bdevs": 2,
00:16:02.438 "num_base_bdevs_discovered": 1,
00:16:02.438 "num_base_bdevs_operational": 1,
00:16:02.438 "base_bdevs_list": [
00:16:02.438 {
00:16:02.438 "name": null,
00:16:02.438 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:02.438 "is_configured": false,
00:16:02.438 "data_offset": 0,
00:16:02.438 "data_size": 7936
00:16:02.438 },
00:16:02.438 {
00:16:02.438 "name": "BaseBdev2",
00:16:02.438 "uuid": "55887598-bc0e-4c3e-81ab-74886597703b",
00:16:02.438 "is_configured": true,
00:16:02.438 "data_offset": 256,
00:16:02.438 "data_size": 7936
00:16:02.438 }
00:16:02.438 ]
00:16:02.438 }'
00:16:02.438 05:05:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:16:02.438 05:05:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:16:02.698 05:05:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i = 1 ))
00:16:02.698 05:05:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:16:02.698 05:05:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:02.698 05:05:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:16:02.698 05:05:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:02.698 05:05:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:16:02.698 05:05:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:02.698 05:05:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:16:02.698 05:05:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:16:02.698 05:05:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2
00:16:02.698 05:05:13
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.698 05:05:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:02.698 [2024-12-14 05:05:13.509549] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:02.698 [2024-12-14 05:05:13.509718] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:02.698 [2024-12-14 05:05:13.531548] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:02.698 [2024-12-14 05:05:13.531615] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:02.698 [2024-12-14 05:05:13.531631] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:16:02.698 05:05:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.698 05:05:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:02.698 05:05:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:02.698 05:05:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:16:02.698 05:05:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:02.698 05:05:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.698 05:05:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:02.698 05:05:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.698 05:05:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@278 -- # raid_bdev= 00:16:02.698 05:05:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:16:02.698 05:05:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:16:02.698 05:05:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@326 -- # killprocess 98791 00:16:02.698 05:05:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@950 -- # '[' -z 98791 ']' 00:16:02.958 05:05:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # kill -0 98791 00:16:02.958 05:05:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@955 -- # uname 00:16:02.958 05:05:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:02.958 05:05:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 98791 00:16:02.958 killing process with pid 98791 00:16:02.958 05:05:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:02.958 05:05:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:02.958 05:05:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@968 -- # echo 'killing process with pid 98791' 00:16:02.958 05:05:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@969 -- # kill 98791 00:16:02.958 [2024-12-14 05:05:13.618770] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:02.958 05:05:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@974 -- # wait 98791 00:16:02.958 [2024-12-14 05:05:13.620389] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:03.218 
05:05:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@328 -- # return 0 00:16:03.218 00:16:03.218 real 0m4.011s 00:16:03.218 user 0m6.017s 00:16:03.218 sys 0m0.955s 00:16:03.218 05:05:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:03.218 05:05:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:03.218 ************************************ 00:16:03.218 END TEST raid_state_function_test_sb_md_interleaved 00:16:03.218 ************************************ 00:16:03.218 05:05:14 bdev_raid -- bdev/bdev_raid.sh@1012 -- # run_test raid_superblock_test_md_interleaved raid_superblock_test raid1 2 00:16:03.218 05:05:14 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:16:03.218 05:05:14 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:03.218 05:05:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:03.218 ************************************ 00:16:03.218 START TEST raid_superblock_test_md_interleaved 00:16:03.218 ************************************ 00:16:03.218 05:05:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 2 00:16:03.218 05:05:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:16:03.218 05:05:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:16:03.218 05:05:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:16:03.218 05:05:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:16:03.218 05:05:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:16:03.218 05:05:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # local 
base_bdevs_pt 00:16:03.218 05:05:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:16:03.218 05:05:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:16:03.218 05:05:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:16:03.218 05:05:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@399 -- # local strip_size 00:16:03.218 05:05:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:16:03.218 05:05:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:16:03.218 05:05:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:16:03.218 05:05:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:16:03.218 05:05:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:16:03.218 05:05:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@412 -- # raid_pid=99031 00:16:03.218 05:05:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:16:03.218 05:05:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@413 -- # waitforlisten 99031 00:16:03.218 05:05:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@831 -- # '[' -z 99031 ']' 00:16:03.218 05:05:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:03.218 05:05:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:03.218 05:05:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@838 -- # echo 
'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:03.218 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:03.218 05:05:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:03.218 05:05:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:03.478 [2024-12-14 05:05:14.167217] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:16:03.478 [2024-12-14 05:05:14.167442] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99031 ] 00:16:03.478 [2024-12-14 05:05:14.328816] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:03.738 [2024-12-14 05:05:14.402180] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:03.738 [2024-12-14 05:05:14.480228] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:03.738 [2024-12-14 05:05:14.480390] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:04.310 05:05:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:04.310 05:05:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@864 -- # return 0 00:16:04.310 05:05:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:16:04.310 05:05:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:04.310 05:05:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:16:04.310 05:05:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt=pt1 00:16:04.310 05:05:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:16:04.310 05:05:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:04.310 05:05:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:04.310 05:05:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:04.310 05:05:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc1 00:16:04.310 05:05:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.310 05:05:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:04.310 malloc1 00:16:04.310 05:05:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.310 05:05:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:04.310 05:05:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.310 05:05:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:04.310 [2024-12-14 05:05:15.004950] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:04.310 [2024-12-14 05:05:15.005145] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:04.310 [2024-12-14 05:05:15.005217] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:04.310 [2024-12-14 05:05:15.005263] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:04.310 
[2024-12-14 05:05:15.007503] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:04.310 [2024-12-14 05:05:15.007616] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:04.310 pt1 00:16:04.310 05:05:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.310 05:05:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:04.310 05:05:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:04.310 05:05:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:16:04.310 05:05:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:16:04.310 05:05:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:16:04.310 05:05:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:04.310 05:05:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:04.310 05:05:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:04.310 05:05:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc2 00:16:04.310 05:05:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.310 05:05:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:04.310 malloc2 00:16:04.310 05:05:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.310 05:05:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 
-- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:04.310 05:05:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.310 05:05:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:04.310 [2024-12-14 05:05:15.059508] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:04.310 [2024-12-14 05:05:15.059724] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:04.310 [2024-12-14 05:05:15.059810] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:04.310 [2024-12-14 05:05:15.059905] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:04.310 [2024-12-14 05:05:15.064212] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:04.310 [2024-12-14 05:05:15.064374] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:04.310 pt2 00:16:04.310 05:05:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.310 05:05:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:04.310 05:05:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:04.310 05:05:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:16:04.310 05:05:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.310 05:05:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:04.310 [2024-12-14 05:05:15.072726] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:04.310 [2024-12-14 05:05:15.075746] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:04.310 [2024-12-14 05:05:15.075975] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:16:04.310 [2024-12-14 05:05:15.076004] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:16:04.310 [2024-12-14 05:05:15.076132] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:16:04.310 [2024-12-14 05:05:15.076285] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:16:04.310 [2024-12-14 05:05:15.076306] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:16:04.310 [2024-12-14 05:05:15.076415] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:04.310 05:05:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.310 05:05:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:04.310 05:05:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:04.310 05:05:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:04.310 05:05:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:04.310 05:05:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:04.311 05:05:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:04.311 05:05:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:04.311 05:05:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:04.311 
05:05:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:04.311 05:05:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:04.311 05:05:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:04.311 05:05:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:04.311 05:05:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.311 05:05:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:04.311 05:05:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.311 05:05:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:04.311 "name": "raid_bdev1", 00:16:04.311 "uuid": "74e96a9e-2013-4f39-8f63-00742479d840", 00:16:04.311 "strip_size_kb": 0, 00:16:04.311 "state": "online", 00:16:04.311 "raid_level": "raid1", 00:16:04.311 "superblock": true, 00:16:04.311 "num_base_bdevs": 2, 00:16:04.311 "num_base_bdevs_discovered": 2, 00:16:04.311 "num_base_bdevs_operational": 2, 00:16:04.311 "base_bdevs_list": [ 00:16:04.311 { 00:16:04.311 "name": "pt1", 00:16:04.311 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:04.311 "is_configured": true, 00:16:04.311 "data_offset": 256, 00:16:04.311 "data_size": 7936 00:16:04.311 }, 00:16:04.311 { 00:16:04.311 "name": "pt2", 00:16:04.311 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:04.311 "is_configured": true, 00:16:04.311 "data_offset": 256, 00:16:04.311 "data_size": 7936 00:16:04.311 } 00:16:04.311 ] 00:16:04.311 }' 00:16:04.311 05:05:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:04.311 05:05:15 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:04.901 05:05:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:16:04.901 05:05:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:04.901 05:05:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:04.901 05:05:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:04.901 05:05:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:16:04.901 05:05:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:04.901 05:05:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:04.901 05:05:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:04.901 05:05:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.901 05:05:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:04.901 [2024-12-14 05:05:15.536318] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:04.901 05:05:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.901 05:05:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:04.901 "name": "raid_bdev1", 00:16:04.901 "aliases": [ 00:16:04.901 "74e96a9e-2013-4f39-8f63-00742479d840" 00:16:04.901 ], 00:16:04.901 "product_name": "Raid Volume", 00:16:04.901 "block_size": 4128, 00:16:04.901 "num_blocks": 7936, 00:16:04.901 "uuid": "74e96a9e-2013-4f39-8f63-00742479d840", 00:16:04.901 "md_size": 32, 
00:16:04.901 "md_interleave": true, 00:16:04.901 "dif_type": 0, 00:16:04.901 "assigned_rate_limits": { 00:16:04.901 "rw_ios_per_sec": 0, 00:16:04.901 "rw_mbytes_per_sec": 0, 00:16:04.901 "r_mbytes_per_sec": 0, 00:16:04.901 "w_mbytes_per_sec": 0 00:16:04.901 }, 00:16:04.901 "claimed": false, 00:16:04.901 "zoned": false, 00:16:04.901 "supported_io_types": { 00:16:04.901 "read": true, 00:16:04.901 "write": true, 00:16:04.901 "unmap": false, 00:16:04.901 "flush": false, 00:16:04.901 "reset": true, 00:16:04.901 "nvme_admin": false, 00:16:04.901 "nvme_io": false, 00:16:04.901 "nvme_io_md": false, 00:16:04.901 "write_zeroes": true, 00:16:04.901 "zcopy": false, 00:16:04.901 "get_zone_info": false, 00:16:04.901 "zone_management": false, 00:16:04.901 "zone_append": false, 00:16:04.901 "compare": false, 00:16:04.901 "compare_and_write": false, 00:16:04.901 "abort": false, 00:16:04.901 "seek_hole": false, 00:16:04.901 "seek_data": false, 00:16:04.901 "copy": false, 00:16:04.901 "nvme_iov_md": false 00:16:04.901 }, 00:16:04.901 "memory_domains": [ 00:16:04.901 { 00:16:04.901 "dma_device_id": "system", 00:16:04.901 "dma_device_type": 1 00:16:04.901 }, 00:16:04.901 { 00:16:04.901 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:04.901 "dma_device_type": 2 00:16:04.901 }, 00:16:04.901 { 00:16:04.901 "dma_device_id": "system", 00:16:04.901 "dma_device_type": 1 00:16:04.901 }, 00:16:04.901 { 00:16:04.901 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:04.901 "dma_device_type": 2 00:16:04.901 } 00:16:04.901 ], 00:16:04.901 "driver_specific": { 00:16:04.901 "raid": { 00:16:04.901 "uuid": "74e96a9e-2013-4f39-8f63-00742479d840", 00:16:04.901 "strip_size_kb": 0, 00:16:04.901 "state": "online", 00:16:04.901 "raid_level": "raid1", 00:16:04.901 "superblock": true, 00:16:04.901 "num_base_bdevs": 2, 00:16:04.901 "num_base_bdevs_discovered": 2, 00:16:04.901 "num_base_bdevs_operational": 2, 00:16:04.901 "base_bdevs_list": [ 00:16:04.901 { 00:16:04.901 "name": "pt1", 00:16:04.901 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:16:04.901 "is_configured": true, 00:16:04.901 "data_offset": 256, 00:16:04.901 "data_size": 7936 00:16:04.901 }, 00:16:04.901 { 00:16:04.901 "name": "pt2", 00:16:04.901 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:04.901 "is_configured": true, 00:16:04.901 "data_offset": 256, 00:16:04.901 "data_size": 7936 00:16:04.901 } 00:16:04.901 ] 00:16:04.901 } 00:16:04.901 } 00:16:04.901 }' 00:16:04.901 05:05:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:04.901 05:05:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:04.901 pt2' 00:16:04.901 05:05:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:04.901 05:05:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:16:04.901 05:05:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:04.901 05:05:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:04.901 05:05:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:04.901 05:05:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.901 05:05:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:04.901 05:05:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.901 05:05:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:16:04.901 05:05:15 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:16:04.901 05:05:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:04.901 05:05:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:04.901 05:05:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:04.901 05:05:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.901 05:05:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:04.901 05:05:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.901 05:05:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:16:04.901 05:05:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:16:04.901 05:05:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:04.901 05:05:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.901 05:05:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:04.901 05:05:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:16:04.901 [2024-12-14 05:05:15.731791] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:04.901 05:05:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.901 05:05:15 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=74e96a9e-2013-4f39-8f63-00742479d840 00:16:04.901 05:05:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@436 -- # '[' -z 74e96a9e-2013-4f39-8f63-00742479d840 ']' 00:16:04.901 05:05:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:04.901 05:05:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.901 05:05:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:05.174 [2024-12-14 05:05:15.779501] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:05.174 [2024-12-14 05:05:15.779580] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:05.174 [2024-12-14 05:05:15.779702] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:05.174 [2024-12-14 05:05:15.779804] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:05.174 [2024-12-14 05:05:15.779876] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:16:05.174 05:05:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.174 05:05:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:05.174 05:05:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.174 05:05:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:05.174 05:05:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:16:05.174 05:05:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.174 05:05:15 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:16:05.174 05:05:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:16:05.174 05:05:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:05.174 05:05:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:16:05.174 05:05:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.174 05:05:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:05.174 05:05:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.174 05:05:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:05.174 05:05:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:16:05.174 05:05:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.174 05:05:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:05.174 05:05:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.174 05:05:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:16:05.174 05:05:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:16:05.174 05:05:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.174 05:05:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:05.174 05:05:15 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.174 05:05:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:16:05.174 05:05:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:16:05.174 05:05:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@650 -- # local es=0 00:16:05.174 05:05:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:16:05.174 05:05:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:16:05.174 05:05:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:05.174 05:05:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:16:05.174 05:05:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:05.174 05:05:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:16:05.174 05:05:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.174 05:05:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:05.174 [2024-12-14 05:05:15.919305] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:16:05.174 [2024-12-14 05:05:15.921509] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:16:05.174 [2024-12-14 05:05:15.921645] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: 
Superblock of a different raid bdev found on bdev malloc1 00:16:05.174 [2024-12-14 05:05:15.921744] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:16:05.174 [2024-12-14 05:05:15.921810] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:05.174 [2024-12-14 05:05:15.921835] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state configuring 00:16:05.174 request: 00:16:05.174 { 00:16:05.174 "name": "raid_bdev1", 00:16:05.174 "raid_level": "raid1", 00:16:05.174 "base_bdevs": [ 00:16:05.174 "malloc1", 00:16:05.174 "malloc2" 00:16:05.174 ], 00:16:05.174 "superblock": false, 00:16:05.174 "method": "bdev_raid_create", 00:16:05.174 "req_id": 1 00:16:05.174 } 00:16:05.174 Got JSON-RPC error response 00:16:05.174 response: 00:16:05.174 { 00:16:05.174 "code": -17, 00:16:05.174 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:16:05.174 } 00:16:05.174 05:05:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:16:05.174 05:05:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@653 -- # es=1 00:16:05.174 05:05:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:05.174 05:05:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:05.174 05:05:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:05.174 05:05:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:05.174 05:05:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:16:05.174 05:05:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.174 05:05:15 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:05.174 05:05:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.174 05:05:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:16:05.174 05:05:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:16:05.174 05:05:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:05.174 05:05:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.174 05:05:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:05.174 [2024-12-14 05:05:15.987135] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:05.174 [2024-12-14 05:05:15.987259] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:05.174 [2024-12-14 05:05:15.987303] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:05.174 [2024-12-14 05:05:15.987358] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:05.174 [2024-12-14 05:05:15.989564] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:05.174 [2024-12-14 05:05:15.989639] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:05.174 [2024-12-14 05:05:15.989729] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:05.174 [2024-12-14 05:05:15.989806] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:05.174 pt1 00:16:05.174 05:05:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.174 05:05:15 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:16:05.174 05:05:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:05.174 05:05:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:05.174 05:05:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:05.174 05:05:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:05.175 05:05:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:05.175 05:05:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:05.175 05:05:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:05.175 05:05:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:05.175 05:05:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:05.175 05:05:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:05.175 05:05:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:05.175 05:05:16 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.175 05:05:16 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:05.175 05:05:16 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.175 05:05:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:05.175 
"name": "raid_bdev1", 00:16:05.175 "uuid": "74e96a9e-2013-4f39-8f63-00742479d840", 00:16:05.175 "strip_size_kb": 0, 00:16:05.175 "state": "configuring", 00:16:05.175 "raid_level": "raid1", 00:16:05.175 "superblock": true, 00:16:05.175 "num_base_bdevs": 2, 00:16:05.175 "num_base_bdevs_discovered": 1, 00:16:05.175 "num_base_bdevs_operational": 2, 00:16:05.175 "base_bdevs_list": [ 00:16:05.175 { 00:16:05.175 "name": "pt1", 00:16:05.175 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:05.175 "is_configured": true, 00:16:05.175 "data_offset": 256, 00:16:05.175 "data_size": 7936 00:16:05.175 }, 00:16:05.175 { 00:16:05.175 "name": null, 00:16:05.175 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:05.175 "is_configured": false, 00:16:05.175 "data_offset": 256, 00:16:05.175 "data_size": 7936 00:16:05.175 } 00:16:05.175 ] 00:16:05.175 }' 00:16:05.175 05:05:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:05.175 05:05:16 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:05.754 05:05:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:16:05.754 05:05:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:16:05.754 05:05:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:05.754 05:05:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:05.754 05:05:16 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.754 05:05:16 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:05.754 [2024-12-14 05:05:16.470326] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:05.754 [2024-12-14 05:05:16.470469] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:05.754 [2024-12-14 05:05:16.470519] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:16:05.754 [2024-12-14 05:05:16.470556] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:05.754 [2024-12-14 05:05:16.470754] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:05.754 [2024-12-14 05:05:16.470804] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:05.754 [2024-12-14 05:05:16.470892] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:05.754 [2024-12-14 05:05:16.470944] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:05.754 [2024-12-14 05:05:16.471067] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:16:05.754 [2024-12-14 05:05:16.471123] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:16:05.754 [2024-12-14 05:05:16.471260] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:16:05.754 [2024-12-14 05:05:16.471377] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:16:05.754 [2024-12-14 05:05:16.471429] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:16:05.754 [2024-12-14 05:05:16.471544] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:05.754 pt2 00:16:05.754 05:05:16 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.754 05:05:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:05.754 05:05:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:05.754 05:05:16 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:05.754 05:05:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:05.754 05:05:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:05.754 05:05:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:05.754 05:05:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:05.754 05:05:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:05.754 05:05:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:05.754 05:05:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:05.754 05:05:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:05.754 05:05:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:05.754 05:05:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:05.754 05:05:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:05.754 05:05:16 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.754 05:05:16 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:05.754 05:05:16 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.754 05:05:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:05.754 "name": 
"raid_bdev1", 00:16:05.754 "uuid": "74e96a9e-2013-4f39-8f63-00742479d840", 00:16:05.754 "strip_size_kb": 0, 00:16:05.754 "state": "online", 00:16:05.754 "raid_level": "raid1", 00:16:05.754 "superblock": true, 00:16:05.754 "num_base_bdevs": 2, 00:16:05.754 "num_base_bdevs_discovered": 2, 00:16:05.754 "num_base_bdevs_operational": 2, 00:16:05.754 "base_bdevs_list": [ 00:16:05.754 { 00:16:05.754 "name": "pt1", 00:16:05.754 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:05.754 "is_configured": true, 00:16:05.754 "data_offset": 256, 00:16:05.754 "data_size": 7936 00:16:05.754 }, 00:16:05.754 { 00:16:05.754 "name": "pt2", 00:16:05.754 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:05.754 "is_configured": true, 00:16:05.754 "data_offset": 256, 00:16:05.754 "data_size": 7936 00:16:05.754 } 00:16:05.754 ] 00:16:05.754 }' 00:16:05.754 05:05:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:05.754 05:05:16 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:06.324 05:05:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:16:06.324 05:05:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:06.324 05:05:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:06.324 05:05:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:06.324 05:05:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:16:06.324 05:05:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:06.324 05:05:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:06.324 05:05:16 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:06.324 05:05:16 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.324 05:05:16 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:06.324 [2024-12-14 05:05:16.949710] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:06.324 05:05:16 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.324 05:05:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:06.324 "name": "raid_bdev1", 00:16:06.324 "aliases": [ 00:16:06.324 "74e96a9e-2013-4f39-8f63-00742479d840" 00:16:06.324 ], 00:16:06.324 "product_name": "Raid Volume", 00:16:06.324 "block_size": 4128, 00:16:06.324 "num_blocks": 7936, 00:16:06.324 "uuid": "74e96a9e-2013-4f39-8f63-00742479d840", 00:16:06.324 "md_size": 32, 00:16:06.324 "md_interleave": true, 00:16:06.324 "dif_type": 0, 00:16:06.324 "assigned_rate_limits": { 00:16:06.324 "rw_ios_per_sec": 0, 00:16:06.324 "rw_mbytes_per_sec": 0, 00:16:06.324 "r_mbytes_per_sec": 0, 00:16:06.324 "w_mbytes_per_sec": 0 00:16:06.324 }, 00:16:06.324 "claimed": false, 00:16:06.324 "zoned": false, 00:16:06.324 "supported_io_types": { 00:16:06.324 "read": true, 00:16:06.324 "write": true, 00:16:06.324 "unmap": false, 00:16:06.324 "flush": false, 00:16:06.324 "reset": true, 00:16:06.324 "nvme_admin": false, 00:16:06.324 "nvme_io": false, 00:16:06.324 "nvme_io_md": false, 00:16:06.324 "write_zeroes": true, 00:16:06.324 "zcopy": false, 00:16:06.324 "get_zone_info": false, 00:16:06.324 "zone_management": false, 00:16:06.324 "zone_append": false, 00:16:06.324 "compare": false, 00:16:06.324 "compare_and_write": false, 00:16:06.324 "abort": false, 00:16:06.324 "seek_hole": false, 00:16:06.324 "seek_data": false, 00:16:06.324 "copy": false, 00:16:06.324 "nvme_iov_md": false 00:16:06.324 }, 
00:16:06.324 "memory_domains": [ 00:16:06.324 { 00:16:06.324 "dma_device_id": "system", 00:16:06.324 "dma_device_type": 1 00:16:06.324 }, 00:16:06.324 { 00:16:06.324 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:06.324 "dma_device_type": 2 00:16:06.324 }, 00:16:06.324 { 00:16:06.324 "dma_device_id": "system", 00:16:06.324 "dma_device_type": 1 00:16:06.324 }, 00:16:06.324 { 00:16:06.324 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:06.324 "dma_device_type": 2 00:16:06.324 } 00:16:06.324 ], 00:16:06.324 "driver_specific": { 00:16:06.324 "raid": { 00:16:06.324 "uuid": "74e96a9e-2013-4f39-8f63-00742479d840", 00:16:06.324 "strip_size_kb": 0, 00:16:06.324 "state": "online", 00:16:06.324 "raid_level": "raid1", 00:16:06.324 "superblock": true, 00:16:06.324 "num_base_bdevs": 2, 00:16:06.324 "num_base_bdevs_discovered": 2, 00:16:06.324 "num_base_bdevs_operational": 2, 00:16:06.324 "base_bdevs_list": [ 00:16:06.324 { 00:16:06.324 "name": "pt1", 00:16:06.324 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:06.324 "is_configured": true, 00:16:06.324 "data_offset": 256, 00:16:06.324 "data_size": 7936 00:16:06.324 }, 00:16:06.324 { 00:16:06.324 "name": "pt2", 00:16:06.324 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:06.324 "is_configured": true, 00:16:06.324 "data_offset": 256, 00:16:06.324 "data_size": 7936 00:16:06.324 } 00:16:06.324 ] 00:16:06.324 } 00:16:06.324 } 00:16:06.324 }' 00:16:06.325 05:05:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:06.325 05:05:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:06.325 pt2' 00:16:06.325 05:05:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:06.325 05:05:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # 
cmp_raid_bdev='4128 32 true 0' 00:16:06.325 05:05:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:06.325 05:05:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:06.325 05:05:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:06.325 05:05:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.325 05:05:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:06.325 05:05:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.325 05:05:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:16:06.325 05:05:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:16:06.325 05:05:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:06.325 05:05:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:06.325 05:05:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:06.325 05:05:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.325 05:05:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:06.325 05:05:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.325 05:05:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 
true 0' 00:16:06.325 05:05:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:16:06.325 05:05:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:16:06.325 05:05:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:06.325 05:05:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.325 05:05:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:06.325 [2024-12-14 05:05:17.193291] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:06.585 05:05:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.585 05:05:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # '[' 74e96a9e-2013-4f39-8f63-00742479d840 '!=' 74e96a9e-2013-4f39-8f63-00742479d840 ']' 00:16:06.585 05:05:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:16:06.585 05:05:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:06.585 05:05:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:16:06.585 05:05:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:16:06.585 05:05:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.585 05:05:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:06.585 [2024-12-14 05:05:17.225016] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:16:06.585 05:05:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:16:06.585 05:05:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:06.585 05:05:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:06.585 05:05:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:06.585 05:05:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:06.585 05:05:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:06.585 05:05:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:06.585 05:05:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:06.585 05:05:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:06.585 05:05:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:06.585 05:05:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:06.585 05:05:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:06.585 05:05:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:06.585 05:05:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.585 05:05:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:06.585 05:05:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.585 05:05:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 
00:16:06.585 "name": "raid_bdev1", 00:16:06.585 "uuid": "74e96a9e-2013-4f39-8f63-00742479d840", 00:16:06.585 "strip_size_kb": 0, 00:16:06.585 "state": "online", 00:16:06.585 "raid_level": "raid1", 00:16:06.585 "superblock": true, 00:16:06.585 "num_base_bdevs": 2, 00:16:06.585 "num_base_bdevs_discovered": 1, 00:16:06.585 "num_base_bdevs_operational": 1, 00:16:06.585 "base_bdevs_list": [ 00:16:06.585 { 00:16:06.585 "name": null, 00:16:06.585 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:06.585 "is_configured": false, 00:16:06.585 "data_offset": 0, 00:16:06.585 "data_size": 7936 00:16:06.585 }, 00:16:06.585 { 00:16:06.585 "name": "pt2", 00:16:06.585 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:06.585 "is_configured": true, 00:16:06.585 "data_offset": 256, 00:16:06.585 "data_size": 7936 00:16:06.585 } 00:16:06.585 ] 00:16:06.585 }' 00:16:06.585 05:05:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:06.585 05:05:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:06.844 05:05:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:06.845 05:05:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.845 05:05:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:06.845 [2024-12-14 05:05:17.704265] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:06.845 [2024-12-14 05:05:17.704355] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:06.845 [2024-12-14 05:05:17.704480] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:06.845 [2024-12-14 05:05:17.704558] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:06.845 [2024-12-14 
05:05:17.704624] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:16:06.845 05:05:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.845 05:05:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:06.845 05:05:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:16:06.845 05:05:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.845 05:05:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:06.845 05:05:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.105 05:05:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:16:07.105 05:05:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:16:07.105 05:05:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:16:07.105 05:05:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:07.105 05:05:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:16:07.105 05:05:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.105 05:05:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:07.105 05:05:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.105 05:05:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:16:07.105 05:05:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < 
num_base_bdevs )) 00:16:07.105 05:05:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:16:07.105 05:05:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:16:07.105 05:05:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@519 -- # i=1 00:16:07.105 05:05:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:07.105 05:05:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.105 05:05:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:07.105 [2024-12-14 05:05:17.776141] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:07.105 [2024-12-14 05:05:17.776217] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:07.105 [2024-12-14 05:05:17.776241] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:16:07.105 [2024-12-14 05:05:17.776254] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:07.105 [2024-12-14 05:05:17.778485] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:07.105 [2024-12-14 05:05:17.778526] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:07.105 [2024-12-14 05:05:17.778586] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:07.105 [2024-12-14 05:05:17.778624] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:07.105 [2024-12-14 05:05:17.778695] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:16:07.105 [2024-12-14 05:05:17.778704] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 
00:16:07.105 [2024-12-14 05:05:17.778822] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:16:07.105 [2024-12-14 05:05:17.778890] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:16:07.105 [2024-12-14 05:05:17.778902] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00 00:16:07.105 [2024-12-14 05:05:17.778960] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:07.105 pt2 00:16:07.105 05:05:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.105 05:05:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:07.105 05:05:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:07.105 05:05:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:07.105 05:05:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:07.105 05:05:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:07.105 05:05:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:07.105 05:05:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:07.105 05:05:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:07.105 05:05:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:07.105 05:05:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:07.105 05:05:17 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:07.105 05:05:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.105 05:05:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:07.105 05:05:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:07.105 05:05:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.105 05:05:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:07.105 "name": "raid_bdev1", 00:16:07.105 "uuid": "74e96a9e-2013-4f39-8f63-00742479d840", 00:16:07.105 "strip_size_kb": 0, 00:16:07.105 "state": "online", 00:16:07.105 "raid_level": "raid1", 00:16:07.105 "superblock": true, 00:16:07.105 "num_base_bdevs": 2, 00:16:07.105 "num_base_bdevs_discovered": 1, 00:16:07.105 "num_base_bdevs_operational": 1, 00:16:07.105 "base_bdevs_list": [ 00:16:07.105 { 00:16:07.105 "name": null, 00:16:07.105 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:07.105 "is_configured": false, 00:16:07.105 "data_offset": 256, 00:16:07.105 "data_size": 7936 00:16:07.105 }, 00:16:07.105 { 00:16:07.105 "name": "pt2", 00:16:07.105 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:07.105 "is_configured": true, 00:16:07.105 "data_offset": 256, 00:16:07.105 "data_size": 7936 00:16:07.105 } 00:16:07.105 ] 00:16:07.105 }' 00:16:07.105 05:05:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:07.105 05:05:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:07.365 05:05:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:07.365 05:05:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:16:07.365 05:05:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:07.365 [2024-12-14 05:05:18.235435] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:07.365 [2024-12-14 05:05:18.235511] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:07.365 [2024-12-14 05:05:18.235607] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:07.365 [2024-12-14 05:05:18.235665] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:07.365 [2024-12-14 05:05:18.235727] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:16:07.365 05:05:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.365 05:05:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:07.365 05:05:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:16:07.365 05:05:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.625 05:05:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:07.625 05:05:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.625 05:05:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:16:07.625 05:05:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:16:07.625 05:05:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:16:07.625 05:05:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:16:07.625 05:05:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.625 05:05:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:07.625 [2024-12-14 05:05:18.299406] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:07.625 [2024-12-14 05:05:18.299511] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:07.625 [2024-12-14 05:05:18.299552] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:16:07.625 [2024-12-14 05:05:18.299610] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:07.625 [2024-12-14 05:05:18.301785] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:07.625 [2024-12-14 05:05:18.301863] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:07.625 [2024-12-14 05:05:18.301952] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:07.625 [2024-12-14 05:05:18.302013] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:07.625 [2024-12-14 05:05:18.302131] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:16:07.625 [2024-12-14 05:05:18.302218] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:07.625 [2024-12-14 05:05:18.302273] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state configuring 00:16:07.625 [2024-12-14 05:05:18.302355] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:07.625 [2024-12-14 05:05:18.302475] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:16:07.625 [2024-12-14 05:05:18.302526] 
bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:16:07.625 [2024-12-14 05:05:18.302613] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:16:07.625 [2024-12-14 05:05:18.302714] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:16:07.625 [2024-12-14 05:05:18.302756] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:16:07.625 [2024-12-14 05:05:18.302869] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:07.625 pt1 00:16:07.625 05:05:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.625 05:05:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:16:07.625 05:05:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:07.625 05:05:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:07.625 05:05:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:07.625 05:05:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:07.625 05:05:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:07.625 05:05:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:07.625 05:05:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:07.625 05:05:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:07.625 05:05:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:16:07.625 05:05:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:07.625 05:05:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:07.625 05:05:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:07.625 05:05:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.625 05:05:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:07.625 05:05:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.625 05:05:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:07.625 "name": "raid_bdev1", 00:16:07.625 "uuid": "74e96a9e-2013-4f39-8f63-00742479d840", 00:16:07.625 "strip_size_kb": 0, 00:16:07.625 "state": "online", 00:16:07.625 "raid_level": "raid1", 00:16:07.625 "superblock": true, 00:16:07.625 "num_base_bdevs": 2, 00:16:07.625 "num_base_bdevs_discovered": 1, 00:16:07.625 "num_base_bdevs_operational": 1, 00:16:07.625 "base_bdevs_list": [ 00:16:07.625 { 00:16:07.625 "name": null, 00:16:07.625 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:07.625 "is_configured": false, 00:16:07.625 "data_offset": 256, 00:16:07.625 "data_size": 7936 00:16:07.625 }, 00:16:07.625 { 00:16:07.625 "name": "pt2", 00:16:07.625 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:07.625 "is_configured": true, 00:16:07.625 "data_offset": 256, 00:16:07.625 "data_size": 7936 00:16:07.625 } 00:16:07.625 ] 00:16:07.625 }' 00:16:07.625 05:05:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:07.625 05:05:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:08.194 05:05:18 bdev_raid.raid_superblock_test_md_interleaved 
-- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:16:08.194 05:05:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:16:08.194 05:05:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.194 05:05:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:08.194 05:05:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.194 05:05:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:16:08.194 05:05:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:08.194 05:05:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.194 05:05:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:08.194 05:05:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:16:08.194 [2024-12-14 05:05:18.806821] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:08.194 05:05:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.194 05:05:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # '[' 74e96a9e-2013-4f39-8f63-00742479d840 '!=' 74e96a9e-2013-4f39-8f63-00742479d840 ']' 00:16:08.194 05:05:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@563 -- # killprocess 99031 00:16:08.194 05:05:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@950 -- # '[' -z 99031 ']' 00:16:08.194 05:05:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@954 -- # kill -0 99031 00:16:08.194 05:05:18 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@955 -- # uname 00:16:08.194 05:05:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:08.194 05:05:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 99031 00:16:08.194 05:05:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:08.194 05:05:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:08.194 killing process with pid 99031 00:16:08.194 05:05:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@968 -- # echo 'killing process with pid 99031' 00:16:08.194 05:05:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@969 -- # kill 99031 00:16:08.194 [2024-12-14 05:05:18.888053] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:08.194 [2024-12-14 05:05:18.888135] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:08.194 [2024-12-14 05:05:18.888204] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:08.194 [2024-12-14 05:05:18.888215] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:16:08.194 05:05:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@974 -- # wait 99031 00:16:08.194 [2024-12-14 05:05:18.930409] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:08.453 05:05:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@565 -- # return 0 00:16:08.453 00:16:08.453 real 0m5.234s 00:16:08.453 user 0m8.313s 00:16:08.453 sys 0m1.191s 00:16:08.453 05:05:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1126 -- # xtrace_disable 
00:16:08.453 ************************************ 00:16:08.453 END TEST raid_superblock_test_md_interleaved 00:16:08.453 ************************************ 00:16:08.453 05:05:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:08.713 05:05:19 bdev_raid -- bdev/bdev_raid.sh@1013 -- # run_test raid_rebuild_test_sb_md_interleaved raid_rebuild_test raid1 2 true false false 00:16:08.713 05:05:19 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:16:08.713 05:05:19 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:08.713 05:05:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:08.713 ************************************ 00:16:08.713 START TEST raid_rebuild_test_sb_md_interleaved 00:16:08.713 ************************************ 00:16:08.713 05:05:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true false false 00:16:08.713 05:05:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:16:08.713 05:05:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:16:08.713 05:05:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:16:08.713 05:05:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:16:08.713 05:05:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # local verify=false 00:16:08.713 05:05:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:16:08.713 05:05:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:08.713 05:05:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:16:08.713 05:05:19 bdev_raid.raid_rebuild_test_sb_md_interleaved 
-- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:08.713 05:05:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:08.713 05:05:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:16:08.713 05:05:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:08.713 05:05:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:08.713 05:05:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:16:08.713 05:05:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:16:08.713 05:05:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:16:08.713 05:05:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # local strip_size 00:16:08.713 05:05:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@577 -- # local create_arg 00:16:08.713 05:05:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:16:08.713 05:05:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@579 -- # local data_offset 00:16:08.713 05:05:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:16:08.713 05:05:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:16:08.713 05:05:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:16:08.713 05:05:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:16:08.713 05:05:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@597 -- # raid_pid=99344 00:16:08.713 05:05:19 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:16:08.713 05:05:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@598 -- # waitforlisten 99344 00:16:08.713 05:05:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@831 -- # '[' -z 99344 ']' 00:16:08.713 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:08.713 05:05:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:08.713 05:05:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:08.713 05:05:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:08.713 05:05:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:08.713 05:05:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:08.713 [2024-12-14 05:05:19.510192] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:16:08.713 I/O size of 3145728 is greater than zero copy threshold (65536). 00:16:08.713 Zero copy mechanism will not be used. 
00:16:08.713 [2024-12-14 05:05:19.510460] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99344 ] 00:16:08.973 [2024-12-14 05:05:19.677195] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:08.973 [2024-12-14 05:05:19.756866] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:08.973 [2024-12-14 05:05:19.835174] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:08.973 [2024-12-14 05:05:19.835231] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:09.542 05:05:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:09.542 05:05:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # return 0 00:16:09.542 05:05:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:09.542 05:05:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1_malloc 00:16:09.542 05:05:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.542 05:05:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:09.542 BaseBdev1_malloc 00:16:09.542 05:05:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.542 05:05:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:09.542 05:05:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.542 05:05:20 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:09.542 [2024-12-14 05:05:20.339596] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:09.542 [2024-12-14 05:05:20.339788] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:09.542 [2024-12-14 05:05:20.339856] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:09.542 [2024-12-14 05:05:20.339901] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:09.542 [2024-12-14 05:05:20.342142] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:09.542 [2024-12-14 05:05:20.342257] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:09.542 BaseBdev1 00:16:09.542 05:05:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.542 05:05:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:09.542 05:05:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2_malloc 00:16:09.542 05:05:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.542 05:05:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:09.542 BaseBdev2_malloc 00:16:09.543 05:05:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.543 05:05:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:16:09.543 05:05:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.543 05:05:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:16:09.543 [2024-12-14 05:05:20.390903] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:16:09.543 [2024-12-14 05:05:20.391186] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:09.543 [2024-12-14 05:05:20.391261] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:09.543 [2024-12-14 05:05:20.391293] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:09.543 [2024-12-14 05:05:20.395814] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:09.543 [2024-12-14 05:05:20.395876] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:09.543 BaseBdev2 00:16:09.543 05:05:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.543 05:05:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b spare_malloc 00:16:09.543 05:05:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.543 05:05:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:09.543 spare_malloc 00:16:09.543 05:05:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.543 05:05:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:16:09.543 05:05:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.803 05:05:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:09.803 spare_delay 00:16:09.803 05:05:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.803 05:05:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:09.803 05:05:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.803 05:05:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:09.803 [2024-12-14 05:05:20.440876] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:09.803 [2024-12-14 05:05:20.441000] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:09.803 [2024-12-14 05:05:20.441049] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:16:09.803 [2024-12-14 05:05:20.441060] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:09.803 [2024-12-14 05:05:20.443256] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:09.803 [2024-12-14 05:05:20.443292] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:09.803 spare 00:16:09.803 05:05:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.803 05:05:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:16:09.803 05:05:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.803 05:05:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:09.803 [2024-12-14 05:05:20.452882] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:09.803 [2024-12-14 05:05:20.455022] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:09.803 [2024-12-14 
05:05:20.455277] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:16:09.803 [2024-12-14 05:05:20.455371] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:16:09.803 [2024-12-14 05:05:20.455519] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:16:09.803 [2024-12-14 05:05:20.455628] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:16:09.803 [2024-12-14 05:05:20.455673] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:16:09.803 [2024-12-14 05:05:20.455800] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:09.803 05:05:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.803 05:05:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:09.803 05:05:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:09.803 05:05:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:09.803 05:05:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:09.803 05:05:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:09.803 05:05:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:09.803 05:05:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:09.803 05:05:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:09.803 05:05:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:16:09.803 05:05:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:09.803 05:05:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:09.803 05:05:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.803 05:05:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:09.803 05:05:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:09.803 05:05:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.803 05:05:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:09.803 "name": "raid_bdev1", 00:16:09.803 "uuid": "f06b18c2-052c-4c1d-89c6-7fc4ec34b846", 00:16:09.803 "strip_size_kb": 0, 00:16:09.803 "state": "online", 00:16:09.803 "raid_level": "raid1", 00:16:09.803 "superblock": true, 00:16:09.803 "num_base_bdevs": 2, 00:16:09.803 "num_base_bdevs_discovered": 2, 00:16:09.803 "num_base_bdevs_operational": 2, 00:16:09.803 "base_bdevs_list": [ 00:16:09.803 { 00:16:09.803 "name": "BaseBdev1", 00:16:09.803 "uuid": "037fc087-9df4-568d-ab74-cafa1ff4c19b", 00:16:09.803 "is_configured": true, 00:16:09.803 "data_offset": 256, 00:16:09.803 "data_size": 7936 00:16:09.803 }, 00:16:09.803 { 00:16:09.803 "name": "BaseBdev2", 00:16:09.803 "uuid": "3fcbf2c4-8110-5a35-aee1-4047ea38eab5", 00:16:09.803 "is_configured": true, 00:16:09.803 "data_offset": 256, 00:16:09.803 "data_size": 7936 00:16:09.803 } 00:16:09.803 ] 00:16:09.803 }' 00:16:09.803 05:05:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:09.803 05:05:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:10.062 05:05:20 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:10.062 05:05:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:16:10.062 05:05:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.062 05:05:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:10.062 [2024-12-14 05:05:20.936359] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:10.322 05:05:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.322 05:05:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:16:10.322 05:05:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:10.322 05:05:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.322 05:05:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:10.322 05:05:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:16:10.322 05:05:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.322 05:05:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:16:10.322 05:05:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:16:10.322 05:05:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@624 -- # '[' false = true ']' 00:16:10.322 05:05:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:16:10.322 05:05:21 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.322 05:05:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:10.322 [2024-12-14 05:05:21.035819] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:10.322 05:05:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.322 05:05:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:10.322 05:05:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:10.322 05:05:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:10.322 05:05:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:10.322 05:05:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:10.322 05:05:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:10.322 05:05:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:10.322 05:05:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:10.322 05:05:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:10.322 05:05:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:10.322 05:05:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:10.322 05:05:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:10.322 05:05:21 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.322 05:05:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:10.322 05:05:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.322 05:05:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:10.322 "name": "raid_bdev1", 00:16:10.323 "uuid": "f06b18c2-052c-4c1d-89c6-7fc4ec34b846", 00:16:10.323 "strip_size_kb": 0, 00:16:10.323 "state": "online", 00:16:10.323 "raid_level": "raid1", 00:16:10.323 "superblock": true, 00:16:10.323 "num_base_bdevs": 2, 00:16:10.323 "num_base_bdevs_discovered": 1, 00:16:10.323 "num_base_bdevs_operational": 1, 00:16:10.323 "base_bdevs_list": [ 00:16:10.323 { 00:16:10.323 "name": null, 00:16:10.323 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:10.323 "is_configured": false, 00:16:10.323 "data_offset": 0, 00:16:10.323 "data_size": 7936 00:16:10.323 }, 00:16:10.323 { 00:16:10.323 "name": "BaseBdev2", 00:16:10.323 "uuid": "3fcbf2c4-8110-5a35-aee1-4047ea38eab5", 00:16:10.323 "is_configured": true, 00:16:10.323 "data_offset": 256, 00:16:10.323 "data_size": 7936 00:16:10.323 } 00:16:10.323 ] 00:16:10.323 }' 00:16:10.323 05:05:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:10.323 05:05:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:10.892 05:05:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:10.892 05:05:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.892 05:05:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:10.892 [2024-12-14 05:05:21.511048] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:10.892 [2024-12-14 05:05:21.516273] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:16:10.892 05:05:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.892 05:05:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@647 -- # sleep 1 00:16:10.892 [2024-12-14 05:05:21.518569] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:11.830 05:05:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:11.830 05:05:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:11.830 05:05:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:11.830 05:05:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:11.830 05:05:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:11.830 05:05:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:11.830 05:05:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:11.830 05:05:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.830 05:05:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:11.830 05:05:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.830 05:05:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:11.830 "name": "raid_bdev1", 00:16:11.830 
"uuid": "f06b18c2-052c-4c1d-89c6-7fc4ec34b846", 00:16:11.830 "strip_size_kb": 0, 00:16:11.830 "state": "online", 00:16:11.830 "raid_level": "raid1", 00:16:11.830 "superblock": true, 00:16:11.830 "num_base_bdevs": 2, 00:16:11.830 "num_base_bdevs_discovered": 2, 00:16:11.830 "num_base_bdevs_operational": 2, 00:16:11.830 "process": { 00:16:11.830 "type": "rebuild", 00:16:11.830 "target": "spare", 00:16:11.830 "progress": { 00:16:11.830 "blocks": 2560, 00:16:11.830 "percent": 32 00:16:11.830 } 00:16:11.830 }, 00:16:11.830 "base_bdevs_list": [ 00:16:11.830 { 00:16:11.830 "name": "spare", 00:16:11.830 "uuid": "cc7f7d7f-fc88-5f86-94cd-a1735e8c15ed", 00:16:11.830 "is_configured": true, 00:16:11.830 "data_offset": 256, 00:16:11.830 "data_size": 7936 00:16:11.830 }, 00:16:11.830 { 00:16:11.830 "name": "BaseBdev2", 00:16:11.830 "uuid": "3fcbf2c4-8110-5a35-aee1-4047ea38eab5", 00:16:11.830 "is_configured": true, 00:16:11.830 "data_offset": 256, 00:16:11.830 "data_size": 7936 00:16:11.830 } 00:16:11.830 ] 00:16:11.830 }' 00:16:11.830 05:05:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:11.830 05:05:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:11.830 05:05:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:11.830 05:05:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:11.830 05:05:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:11.830 05:05:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.830 05:05:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:11.830 [2024-12-14 05:05:22.682635] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: 
*DEBUG*: spare 00:16:12.090 [2024-12-14 05:05:22.727468] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:12.090 [2024-12-14 05:05:22.727559] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:12.090 [2024-12-14 05:05:22.727582] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:12.090 [2024-12-14 05:05:22.727592] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:12.090 05:05:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.090 05:05:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:12.090 05:05:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:12.090 05:05:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:12.090 05:05:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:12.090 05:05:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:12.090 05:05:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:12.090 05:05:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:12.090 05:05:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:12.090 05:05:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:12.090 05:05:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:12.090 05:05:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:12.090 05:05:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:12.090 05:05:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.090 05:05:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:12.090 05:05:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.090 05:05:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:12.090 "name": "raid_bdev1", 00:16:12.090 "uuid": "f06b18c2-052c-4c1d-89c6-7fc4ec34b846", 00:16:12.090 "strip_size_kb": 0, 00:16:12.090 "state": "online", 00:16:12.090 "raid_level": "raid1", 00:16:12.090 "superblock": true, 00:16:12.090 "num_base_bdevs": 2, 00:16:12.090 "num_base_bdevs_discovered": 1, 00:16:12.090 "num_base_bdevs_operational": 1, 00:16:12.090 "base_bdevs_list": [ 00:16:12.090 { 00:16:12.090 "name": null, 00:16:12.090 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:12.090 "is_configured": false, 00:16:12.090 "data_offset": 0, 00:16:12.090 "data_size": 7936 00:16:12.090 }, 00:16:12.090 { 00:16:12.090 "name": "BaseBdev2", 00:16:12.090 "uuid": "3fcbf2c4-8110-5a35-aee1-4047ea38eab5", 00:16:12.090 "is_configured": true, 00:16:12.090 "data_offset": 256, 00:16:12.090 "data_size": 7936 00:16:12.090 } 00:16:12.090 ] 00:16:12.090 }' 00:16:12.090 05:05:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:12.090 05:05:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:12.349 05:05:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:12.349 05:05:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:16:12.349 05:05:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:12.349 05:05:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:12.349 05:05:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:12.350 05:05:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:12.350 05:05:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.350 05:05:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:12.350 05:05:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:12.350 05:05:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.350 05:05:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:12.350 "name": "raid_bdev1", 00:16:12.350 "uuid": "f06b18c2-052c-4c1d-89c6-7fc4ec34b846", 00:16:12.350 "strip_size_kb": 0, 00:16:12.350 "state": "online", 00:16:12.350 "raid_level": "raid1", 00:16:12.350 "superblock": true, 00:16:12.350 "num_base_bdevs": 2, 00:16:12.350 "num_base_bdevs_discovered": 1, 00:16:12.350 "num_base_bdevs_operational": 1, 00:16:12.350 "base_bdevs_list": [ 00:16:12.350 { 00:16:12.350 "name": null, 00:16:12.350 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:12.350 "is_configured": false, 00:16:12.350 "data_offset": 0, 00:16:12.350 "data_size": 7936 00:16:12.350 }, 00:16:12.350 { 00:16:12.350 "name": "BaseBdev2", 00:16:12.350 "uuid": "3fcbf2c4-8110-5a35-aee1-4047ea38eab5", 00:16:12.350 "is_configured": true, 00:16:12.350 "data_offset": 256, 00:16:12.350 "data_size": 7936 00:16:12.350 } 00:16:12.350 ] 00:16:12.350 }' 
00:16:12.350 05:05:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:12.609 05:05:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:12.609 05:05:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:12.609 05:05:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:12.609 05:05:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:12.609 05:05:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.609 05:05:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:12.609 [2024-12-14 05:05:23.276896] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:12.609 [2024-12-14 05:05:23.281895] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:16:12.609 05:05:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.609 05:05:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@663 -- # sleep 1 00:16:12.609 [2024-12-14 05:05:23.284215] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:13.547 05:05:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:13.547 05:05:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:13.547 05:05:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:13.547 05:05:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # 
local target=spare 00:16:13.547 05:05:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:13.547 05:05:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:13.547 05:05:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.547 05:05:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:13.547 05:05:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:13.547 05:05:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.547 05:05:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:13.547 "name": "raid_bdev1", 00:16:13.547 "uuid": "f06b18c2-052c-4c1d-89c6-7fc4ec34b846", 00:16:13.547 "strip_size_kb": 0, 00:16:13.547 "state": "online", 00:16:13.547 "raid_level": "raid1", 00:16:13.547 "superblock": true, 00:16:13.547 "num_base_bdevs": 2, 00:16:13.547 "num_base_bdevs_discovered": 2, 00:16:13.547 "num_base_bdevs_operational": 2, 00:16:13.547 "process": { 00:16:13.547 "type": "rebuild", 00:16:13.547 "target": "spare", 00:16:13.547 "progress": { 00:16:13.548 "blocks": 2560, 00:16:13.548 "percent": 32 00:16:13.548 } 00:16:13.548 }, 00:16:13.548 "base_bdevs_list": [ 00:16:13.548 { 00:16:13.548 "name": "spare", 00:16:13.548 "uuid": "cc7f7d7f-fc88-5f86-94cd-a1735e8c15ed", 00:16:13.548 "is_configured": true, 00:16:13.548 "data_offset": 256, 00:16:13.548 "data_size": 7936 00:16:13.548 }, 00:16:13.548 { 00:16:13.548 "name": "BaseBdev2", 00:16:13.548 "uuid": "3fcbf2c4-8110-5a35-aee1-4047ea38eab5", 00:16:13.548 "is_configured": true, 00:16:13.548 "data_offset": 256, 00:16:13.548 "data_size": 7936 00:16:13.548 } 00:16:13.548 ] 00:16:13.548 }' 00:16:13.548 05:05:24 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:13.548 05:05:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:13.548 05:05:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:13.548 05:05:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:13.548 05:05:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:16:13.548 05:05:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:16:13.548 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:16:13.548 05:05:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:16:13.548 05:05:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:16:13.548 05:05:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:16:13.548 05:05:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # local timeout=615 00:16:13.548 05:05:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:13.548 05:05:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:13.548 05:05:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:13.548 05:05:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:13.548 05:05:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:13.548 05:05:24 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:13.548 05:05:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:13.548 05:05:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:13.548 05:05:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.548 05:05:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:13.808 05:05:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.808 05:05:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:13.808 "name": "raid_bdev1", 00:16:13.808 "uuid": "f06b18c2-052c-4c1d-89c6-7fc4ec34b846", 00:16:13.808 "strip_size_kb": 0, 00:16:13.808 "state": "online", 00:16:13.808 "raid_level": "raid1", 00:16:13.808 "superblock": true, 00:16:13.808 "num_base_bdevs": 2, 00:16:13.808 "num_base_bdevs_discovered": 2, 00:16:13.808 "num_base_bdevs_operational": 2, 00:16:13.808 "process": { 00:16:13.808 "type": "rebuild", 00:16:13.808 "target": "spare", 00:16:13.808 "progress": { 00:16:13.808 "blocks": 2816, 00:16:13.808 "percent": 35 00:16:13.808 } 00:16:13.808 }, 00:16:13.808 "base_bdevs_list": [ 00:16:13.808 { 00:16:13.808 "name": "spare", 00:16:13.808 "uuid": "cc7f7d7f-fc88-5f86-94cd-a1735e8c15ed", 00:16:13.808 "is_configured": true, 00:16:13.808 "data_offset": 256, 00:16:13.808 "data_size": 7936 00:16:13.808 }, 00:16:13.808 { 00:16:13.808 "name": "BaseBdev2", 00:16:13.808 "uuid": "3fcbf2c4-8110-5a35-aee1-4047ea38eab5", 00:16:13.808 "is_configured": true, 00:16:13.808 "data_offset": 256, 00:16:13.808 "data_size": 7936 00:16:13.808 } 00:16:13.808 ] 00:16:13.808 }' 00:16:13.808 05:05:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:13.808 05:05:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:13.808 05:05:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:13.808 05:05:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:13.808 05:05:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:14.747 05:05:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:14.747 05:05:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:14.747 05:05:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:14.747 05:05:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:14.747 05:05:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:14.747 05:05:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:14.747 05:05:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:14.747 05:05:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:14.747 05:05:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.747 05:05:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:14.747 05:05:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.747 05:05:25 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:14.747 "name": "raid_bdev1", 00:16:14.747 "uuid": "f06b18c2-052c-4c1d-89c6-7fc4ec34b846", 00:16:14.747 "strip_size_kb": 0, 00:16:14.747 "state": "online", 00:16:14.747 "raid_level": "raid1", 00:16:14.747 "superblock": true, 00:16:14.747 "num_base_bdevs": 2, 00:16:14.747 "num_base_bdevs_discovered": 2, 00:16:14.747 "num_base_bdevs_operational": 2, 00:16:14.747 "process": { 00:16:14.747 "type": "rebuild", 00:16:14.747 "target": "spare", 00:16:14.747 "progress": { 00:16:14.747 "blocks": 5632, 00:16:14.747 "percent": 70 00:16:14.747 } 00:16:14.747 }, 00:16:14.747 "base_bdevs_list": [ 00:16:14.747 { 00:16:14.747 "name": "spare", 00:16:14.747 "uuid": "cc7f7d7f-fc88-5f86-94cd-a1735e8c15ed", 00:16:14.747 "is_configured": true, 00:16:14.747 "data_offset": 256, 00:16:14.747 "data_size": 7936 00:16:14.747 }, 00:16:14.747 { 00:16:14.747 "name": "BaseBdev2", 00:16:14.747 "uuid": "3fcbf2c4-8110-5a35-aee1-4047ea38eab5", 00:16:14.747 "is_configured": true, 00:16:14.747 "data_offset": 256, 00:16:14.747 "data_size": 7936 00:16:14.747 } 00:16:14.747 ] 00:16:14.747 }' 00:16:14.747 05:05:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:15.006 05:05:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:15.006 05:05:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:15.006 05:05:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:15.006 05:05:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:15.574 [2024-12-14 05:05:26.404845] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:15.574 [2024-12-14 05:05:26.405005] 
bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:15.574 [2024-12-14 05:05:26.405153] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:16.143 05:05:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:16.143 05:05:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:16.143 05:05:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:16.143 05:05:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:16.143 05:05:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:16.143 05:05:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:16.143 05:05:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:16.143 05:05:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:16.143 05:05:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.143 05:05:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:16.143 05:05:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.144 05:05:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:16.144 "name": "raid_bdev1", 00:16:16.144 "uuid": "f06b18c2-052c-4c1d-89c6-7fc4ec34b846", 00:16:16.144 "strip_size_kb": 0, 00:16:16.144 "state": "online", 00:16:16.144 "raid_level": "raid1", 00:16:16.144 "superblock": true, 00:16:16.144 "num_base_bdevs": 2, 00:16:16.144 
"num_base_bdevs_discovered": 2, 00:16:16.144 "num_base_bdevs_operational": 2, 00:16:16.144 "base_bdevs_list": [ 00:16:16.144 { 00:16:16.144 "name": "spare", 00:16:16.144 "uuid": "cc7f7d7f-fc88-5f86-94cd-a1735e8c15ed", 00:16:16.144 "is_configured": true, 00:16:16.144 "data_offset": 256, 00:16:16.144 "data_size": 7936 00:16:16.144 }, 00:16:16.144 { 00:16:16.144 "name": "BaseBdev2", 00:16:16.144 "uuid": "3fcbf2c4-8110-5a35-aee1-4047ea38eab5", 00:16:16.144 "is_configured": true, 00:16:16.144 "data_offset": 256, 00:16:16.144 "data_size": 7936 00:16:16.144 } 00:16:16.144 ] 00:16:16.144 }' 00:16:16.144 05:05:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:16.144 05:05:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:16.144 05:05:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:16.144 05:05:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:16.144 05:05:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@709 -- # break 00:16:16.144 05:05:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:16.144 05:05:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:16.144 05:05:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:16.144 05:05:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:16.144 05:05:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:16.144 05:05:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:16.144 05:05:26 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.144 05:05:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:16.144 05:05:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:16.144 05:05:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.144 05:05:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:16.144 "name": "raid_bdev1", 00:16:16.144 "uuid": "f06b18c2-052c-4c1d-89c6-7fc4ec34b846", 00:16:16.144 "strip_size_kb": 0, 00:16:16.144 "state": "online", 00:16:16.144 "raid_level": "raid1", 00:16:16.144 "superblock": true, 00:16:16.144 "num_base_bdevs": 2, 00:16:16.144 "num_base_bdevs_discovered": 2, 00:16:16.144 "num_base_bdevs_operational": 2, 00:16:16.144 "base_bdevs_list": [ 00:16:16.144 { 00:16:16.144 "name": "spare", 00:16:16.144 "uuid": "cc7f7d7f-fc88-5f86-94cd-a1735e8c15ed", 00:16:16.144 "is_configured": true, 00:16:16.144 "data_offset": 256, 00:16:16.144 "data_size": 7936 00:16:16.144 }, 00:16:16.144 { 00:16:16.144 "name": "BaseBdev2", 00:16:16.144 "uuid": "3fcbf2c4-8110-5a35-aee1-4047ea38eab5", 00:16:16.144 "is_configured": true, 00:16:16.144 "data_offset": 256, 00:16:16.144 "data_size": 7936 00:16:16.144 } 00:16:16.144 ] 00:16:16.144 }' 00:16:16.144 05:05:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:16.144 05:05:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:16.144 05:05:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:16.144 05:05:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:16.144 05:05:27 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:16.144 05:05:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:16.144 05:05:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:16.144 05:05:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:16.144 05:05:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:16.144 05:05:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:16.144 05:05:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:16.144 05:05:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:16.144 05:05:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:16.144 05:05:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:16.144 05:05:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:16.144 05:05:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:16.144 05:05:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.144 05:05:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:16.403 05:05:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.403 05:05:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:16.403 "name": 
"raid_bdev1", 00:16:16.403 "uuid": "f06b18c2-052c-4c1d-89c6-7fc4ec34b846", 00:16:16.403 "strip_size_kb": 0, 00:16:16.403 "state": "online", 00:16:16.403 "raid_level": "raid1", 00:16:16.403 "superblock": true, 00:16:16.403 "num_base_bdevs": 2, 00:16:16.403 "num_base_bdevs_discovered": 2, 00:16:16.403 "num_base_bdevs_operational": 2, 00:16:16.403 "base_bdevs_list": [ 00:16:16.403 { 00:16:16.403 "name": "spare", 00:16:16.403 "uuid": "cc7f7d7f-fc88-5f86-94cd-a1735e8c15ed", 00:16:16.403 "is_configured": true, 00:16:16.403 "data_offset": 256, 00:16:16.403 "data_size": 7936 00:16:16.403 }, 00:16:16.403 { 00:16:16.403 "name": "BaseBdev2", 00:16:16.403 "uuid": "3fcbf2c4-8110-5a35-aee1-4047ea38eab5", 00:16:16.403 "is_configured": true, 00:16:16.403 "data_offset": 256, 00:16:16.403 "data_size": 7936 00:16:16.403 } 00:16:16.403 ] 00:16:16.403 }' 00:16:16.403 05:05:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:16.403 05:05:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:16.663 05:05:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:16.663 05:05:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.663 05:05:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:16.663 [2024-12-14 05:05:27.505114] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:16.663 [2024-12-14 05:05:27.505234] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:16.663 [2024-12-14 05:05:27.505370] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:16.663 [2024-12-14 05:05:27.505486] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:16.663 [2024-12-14 
05:05:27.505566] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:16:16.663 05:05:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.663 05:05:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:16.663 05:05:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # jq length 00:16:16.663 05:05:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.663 05:05:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:16.663 05:05:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.923 05:05:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:16.923 05:05:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:16:16.923 05:05:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:16:16.923 05:05:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:16:16.923 05:05:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.923 05:05:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:16.923 05:05:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.923 05:05:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:16.923 05:05:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.923 05:05:27 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:16.923 [2024-12-14 05:05:27.576982] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:16.923 [2024-12-14 05:05:27.577120] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:16.923 [2024-12-14 05:05:27.577174] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:16:16.923 [2024-12-14 05:05:27.577217] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:16.923 [2024-12-14 05:05:27.579541] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:16.923 [2024-12-14 05:05:27.579631] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:16.923 [2024-12-14 05:05:27.579721] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:16.923 [2024-12-14 05:05:27.579822] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:16.923 [2024-12-14 05:05:27.580003] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:16.923 spare 00:16:16.923 05:05:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.923 05:05:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:16:16.923 05:05:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.923 05:05:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:16.923 [2024-12-14 05:05:27.679955] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006600 00:16:16.923 [2024-12-14 05:05:27.680031] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:16:16.923 [2024-12-14 05:05:27.680198] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:16:16.923 [2024-12-14 05:05:27.680326] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006600 00:16:16.923 [2024-12-14 05:05:27.680374] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006600 00:16:16.923 [2024-12-14 05:05:27.680505] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:16.923 05:05:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.923 05:05:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:16.923 05:05:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:16.923 05:05:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:16.923 05:05:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:16.923 05:05:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:16.923 05:05:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:16.923 05:05:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:16.923 05:05:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:16.923 05:05:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:16.923 05:05:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:16.923 05:05:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:16.923 05:05:27 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:16.923 05:05:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.923 05:05:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:16.923 05:05:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.923 05:05:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:16.923 "name": "raid_bdev1", 00:16:16.923 "uuid": "f06b18c2-052c-4c1d-89c6-7fc4ec34b846", 00:16:16.923 "strip_size_kb": 0, 00:16:16.923 "state": "online", 00:16:16.923 "raid_level": "raid1", 00:16:16.923 "superblock": true, 00:16:16.923 "num_base_bdevs": 2, 00:16:16.923 "num_base_bdevs_discovered": 2, 00:16:16.923 "num_base_bdevs_operational": 2, 00:16:16.923 "base_bdevs_list": [ 00:16:16.923 { 00:16:16.923 "name": "spare", 00:16:16.923 "uuid": "cc7f7d7f-fc88-5f86-94cd-a1735e8c15ed", 00:16:16.923 "is_configured": true, 00:16:16.923 "data_offset": 256, 00:16:16.923 "data_size": 7936 00:16:16.923 }, 00:16:16.923 { 00:16:16.923 "name": "BaseBdev2", 00:16:16.923 "uuid": "3fcbf2c4-8110-5a35-aee1-4047ea38eab5", 00:16:16.923 "is_configured": true, 00:16:16.923 "data_offset": 256, 00:16:16.923 "data_size": 7936 00:16:16.923 } 00:16:16.923 ] 00:16:16.923 }' 00:16:16.923 05:05:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:16.923 05:05:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:17.493 05:05:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:17.493 05:05:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:17.493 05:05:28 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:17.493 05:05:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:17.493 05:05:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:17.493 05:05:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:17.493 05:05:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.493 05:05:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:17.493 05:05:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:17.493 05:05:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.493 05:05:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:17.493 "name": "raid_bdev1", 00:16:17.493 "uuid": "f06b18c2-052c-4c1d-89c6-7fc4ec34b846", 00:16:17.493 "strip_size_kb": 0, 00:16:17.493 "state": "online", 00:16:17.493 "raid_level": "raid1", 00:16:17.493 "superblock": true, 00:16:17.493 "num_base_bdevs": 2, 00:16:17.493 "num_base_bdevs_discovered": 2, 00:16:17.493 "num_base_bdevs_operational": 2, 00:16:17.493 "base_bdevs_list": [ 00:16:17.493 { 00:16:17.493 "name": "spare", 00:16:17.493 "uuid": "cc7f7d7f-fc88-5f86-94cd-a1735e8c15ed", 00:16:17.493 "is_configured": true, 00:16:17.493 "data_offset": 256, 00:16:17.493 "data_size": 7936 00:16:17.493 }, 00:16:17.493 { 00:16:17.493 "name": "BaseBdev2", 00:16:17.493 "uuid": "3fcbf2c4-8110-5a35-aee1-4047ea38eab5", 00:16:17.493 "is_configured": true, 00:16:17.493 "data_offset": 256, 00:16:17.493 "data_size": 7936 00:16:17.493 } 00:16:17.493 ] 00:16:17.493 }' 00:16:17.493 05:05:28 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:17.493 05:05:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:17.493 05:05:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:17.493 05:05:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:17.493 05:05:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:16:17.493 05:05:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:17.493 05:05:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.493 05:05:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:17.493 05:05:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.493 05:05:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:16:17.493 05:05:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:17.493 05:05:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.493 05:05:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:17.493 [2024-12-14 05:05:28.263825] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:17.493 05:05:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.493 05:05:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:17.493 05:05:28 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:17.493 05:05:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:17.493 05:05:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:17.493 05:05:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:17.493 05:05:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:17.493 05:05:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:17.493 05:05:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:17.493 05:05:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:17.493 05:05:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:17.493 05:05:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:17.493 05:05:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:17.493 05:05:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.493 05:05:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:17.493 05:05:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.493 05:05:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:17.493 "name": "raid_bdev1", 00:16:17.493 "uuid": "f06b18c2-052c-4c1d-89c6-7fc4ec34b846", 00:16:17.493 "strip_size_kb": 0, 00:16:17.493 "state": "online", 00:16:17.493 
"raid_level": "raid1", 00:16:17.493 "superblock": true, 00:16:17.493 "num_base_bdevs": 2, 00:16:17.493 "num_base_bdevs_discovered": 1, 00:16:17.493 "num_base_bdevs_operational": 1, 00:16:17.493 "base_bdevs_list": [ 00:16:17.493 { 00:16:17.493 "name": null, 00:16:17.493 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:17.494 "is_configured": false, 00:16:17.494 "data_offset": 0, 00:16:17.494 "data_size": 7936 00:16:17.494 }, 00:16:17.494 { 00:16:17.494 "name": "BaseBdev2", 00:16:17.494 "uuid": "3fcbf2c4-8110-5a35-aee1-4047ea38eab5", 00:16:17.494 "is_configured": true, 00:16:17.494 "data_offset": 256, 00:16:17.494 "data_size": 7936 00:16:17.494 } 00:16:17.494 ] 00:16:17.494 }' 00:16:17.494 05:05:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:17.494 05:05:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:18.063 05:05:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:18.063 05:05:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.063 05:05:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:18.063 [2024-12-14 05:05:28.663267] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:18.063 [2024-12-14 05:05:28.663502] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:16:18.063 [2024-12-14 05:05:28.663537] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:16:18.063 [2024-12-14 05:05:28.663577] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:18.063 [2024-12-14 05:05:28.668571] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:16:18.063 05:05:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.063 05:05:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@757 -- # sleep 1 00:16:18.063 [2024-12-14 05:05:28.670743] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:19.002 05:05:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:19.002 05:05:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:19.002 05:05:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:19.002 05:05:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:19.002 05:05:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:19.002 05:05:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:19.002 05:05:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.002 05:05:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:19.002 05:05:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:19.002 05:05:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.002 05:05:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:16:19.002 "name": "raid_bdev1", 00:16:19.002 "uuid": "f06b18c2-052c-4c1d-89c6-7fc4ec34b846", 00:16:19.002 "strip_size_kb": 0, 00:16:19.002 "state": "online", 00:16:19.002 "raid_level": "raid1", 00:16:19.002 "superblock": true, 00:16:19.002 "num_base_bdevs": 2, 00:16:19.002 "num_base_bdevs_discovered": 2, 00:16:19.002 "num_base_bdevs_operational": 2, 00:16:19.002 "process": { 00:16:19.002 "type": "rebuild", 00:16:19.002 "target": "spare", 00:16:19.002 "progress": { 00:16:19.002 "blocks": 2560, 00:16:19.002 "percent": 32 00:16:19.002 } 00:16:19.002 }, 00:16:19.002 "base_bdevs_list": [ 00:16:19.002 { 00:16:19.002 "name": "spare", 00:16:19.002 "uuid": "cc7f7d7f-fc88-5f86-94cd-a1735e8c15ed", 00:16:19.002 "is_configured": true, 00:16:19.003 "data_offset": 256, 00:16:19.003 "data_size": 7936 00:16:19.003 }, 00:16:19.003 { 00:16:19.003 "name": "BaseBdev2", 00:16:19.003 "uuid": "3fcbf2c4-8110-5a35-aee1-4047ea38eab5", 00:16:19.003 "is_configured": true, 00:16:19.003 "data_offset": 256, 00:16:19.003 "data_size": 7936 00:16:19.003 } 00:16:19.003 ] 00:16:19.003 }' 00:16:19.003 05:05:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:19.003 05:05:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:19.003 05:05:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:19.003 05:05:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:19.003 05:05:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:16:19.003 05:05:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.003 05:05:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:19.003 [2024-12-14 05:05:29.807183] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:19.003 [2024-12-14 05:05:29.878674] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:19.003 [2024-12-14 05:05:29.878830] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:19.003 [2024-12-14 05:05:29.878877] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:19.003 [2024-12-14 05:05:29.878920] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:19.262 05:05:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.262 05:05:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:19.262 05:05:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:19.262 05:05:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:19.262 05:05:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:19.262 05:05:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:19.262 05:05:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:19.262 05:05:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:19.262 05:05:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:19.262 05:05:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:19.262 05:05:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:19.262 05:05:29 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:19.262 05:05:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:19.262 05:05:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.262 05:05:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:19.262 05:05:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.262 05:05:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:19.262 "name": "raid_bdev1", 00:16:19.262 "uuid": "f06b18c2-052c-4c1d-89c6-7fc4ec34b846", 00:16:19.262 "strip_size_kb": 0, 00:16:19.262 "state": "online", 00:16:19.262 "raid_level": "raid1", 00:16:19.262 "superblock": true, 00:16:19.262 "num_base_bdevs": 2, 00:16:19.262 "num_base_bdevs_discovered": 1, 00:16:19.262 "num_base_bdevs_operational": 1, 00:16:19.262 "base_bdevs_list": [ 00:16:19.262 { 00:16:19.262 "name": null, 00:16:19.262 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:19.262 "is_configured": false, 00:16:19.262 "data_offset": 0, 00:16:19.262 "data_size": 7936 00:16:19.262 }, 00:16:19.262 { 00:16:19.262 "name": "BaseBdev2", 00:16:19.262 "uuid": "3fcbf2c4-8110-5a35-aee1-4047ea38eab5", 00:16:19.262 "is_configured": true, 00:16:19.262 "data_offset": 256, 00:16:19.262 "data_size": 7936 00:16:19.262 } 00:16:19.262 ] 00:16:19.262 }' 00:16:19.262 05:05:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:19.262 05:05:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:19.522 05:05:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:19.522 05:05:30 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.522 05:05:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:19.522 [2024-12-14 05:05:30.308543] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:19.522 [2024-12-14 05:05:30.308684] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:19.522 [2024-12-14 05:05:30.308738] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:16:19.522 [2024-12-14 05:05:30.308775] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:19.522 [2024-12-14 05:05:30.309044] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:19.522 [2024-12-14 05:05:30.309100] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:19.522 [2024-12-14 05:05:30.309208] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:19.522 [2024-12-14 05:05:30.309253] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:16:19.522 [2024-12-14 05:05:30.309319] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:16:19.522 [2024-12-14 05:05:30.309393] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:19.522 [2024-12-14 05:05:30.313287] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:16:19.522 spare 00:16:19.522 05:05:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.522 05:05:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@764 -- # sleep 1 00:16:19.522 [2024-12-14 05:05:30.315481] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:20.461 05:05:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:20.461 05:05:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:20.461 05:05:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:20.461 05:05:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:20.461 05:05:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:20.461 05:05:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:20.461 05:05:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.461 05:05:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:20.461 05:05:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:20.461 05:05:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.721 05:05:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:16:20.721 "name": "raid_bdev1", 00:16:20.721 "uuid": "f06b18c2-052c-4c1d-89c6-7fc4ec34b846", 00:16:20.721 "strip_size_kb": 0, 00:16:20.721 "state": "online", 00:16:20.721 "raid_level": "raid1", 00:16:20.721 "superblock": true, 00:16:20.721 "num_base_bdevs": 2, 00:16:20.721 "num_base_bdevs_discovered": 2, 00:16:20.721 "num_base_bdevs_operational": 2, 00:16:20.721 "process": { 00:16:20.721 "type": "rebuild", 00:16:20.721 "target": "spare", 00:16:20.721 "progress": { 00:16:20.721 "blocks": 2560, 00:16:20.721 "percent": 32 00:16:20.721 } 00:16:20.721 }, 00:16:20.721 "base_bdevs_list": [ 00:16:20.721 { 00:16:20.721 "name": "spare", 00:16:20.721 "uuid": "cc7f7d7f-fc88-5f86-94cd-a1735e8c15ed", 00:16:20.721 "is_configured": true, 00:16:20.721 "data_offset": 256, 00:16:20.721 "data_size": 7936 00:16:20.721 }, 00:16:20.721 { 00:16:20.721 "name": "BaseBdev2", 00:16:20.721 "uuid": "3fcbf2c4-8110-5a35-aee1-4047ea38eab5", 00:16:20.721 "is_configured": true, 00:16:20.721 "data_offset": 256, 00:16:20.721 "data_size": 7936 00:16:20.721 } 00:16:20.721 ] 00:16:20.721 }' 00:16:20.721 05:05:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:20.721 05:05:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:20.721 05:05:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:20.721 05:05:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:20.721 05:05:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:16:20.721 05:05:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.721 05:05:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:20.721 [2024-12-14 
05:05:31.456454] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:20.721 [2024-12-14 05:05:31.523413] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:20.721 [2024-12-14 05:05:31.523574] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:20.721 [2024-12-14 05:05:31.523619] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:20.721 [2024-12-14 05:05:31.523650] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:20.721 05:05:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.721 05:05:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:20.721 05:05:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:20.721 05:05:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:20.721 05:05:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:20.721 05:05:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:20.721 05:05:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:20.721 05:05:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:20.721 05:05:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:20.721 05:05:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:20.721 05:05:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:20.721 05:05:31 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:20.721 05:05:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:20.721 05:05:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.721 05:05:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:20.721 05:05:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.721 05:05:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:20.721 "name": "raid_bdev1", 00:16:20.721 "uuid": "f06b18c2-052c-4c1d-89c6-7fc4ec34b846", 00:16:20.721 "strip_size_kb": 0, 00:16:20.721 "state": "online", 00:16:20.721 "raid_level": "raid1", 00:16:20.721 "superblock": true, 00:16:20.721 "num_base_bdevs": 2, 00:16:20.721 "num_base_bdevs_discovered": 1, 00:16:20.721 "num_base_bdevs_operational": 1, 00:16:20.721 "base_bdevs_list": [ 00:16:20.721 { 00:16:20.721 "name": null, 00:16:20.721 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:20.721 "is_configured": false, 00:16:20.721 "data_offset": 0, 00:16:20.721 "data_size": 7936 00:16:20.721 }, 00:16:20.721 { 00:16:20.721 "name": "BaseBdev2", 00:16:20.721 "uuid": "3fcbf2c4-8110-5a35-aee1-4047ea38eab5", 00:16:20.721 "is_configured": true, 00:16:20.721 "data_offset": 256, 00:16:20.721 "data_size": 7936 00:16:20.721 } 00:16:20.721 ] 00:16:20.721 }' 00:16:20.721 05:05:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:20.721 05:05:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:21.290 05:05:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:21.290 05:05:32 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:21.291 05:05:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:21.291 05:05:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:21.291 05:05:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:21.291 05:05:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:21.291 05:05:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:21.291 05:05:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.291 05:05:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:21.291 05:05:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.291 05:05:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:21.291 "name": "raid_bdev1", 00:16:21.291 "uuid": "f06b18c2-052c-4c1d-89c6-7fc4ec34b846", 00:16:21.291 "strip_size_kb": 0, 00:16:21.291 "state": "online", 00:16:21.291 "raid_level": "raid1", 00:16:21.291 "superblock": true, 00:16:21.291 "num_base_bdevs": 2, 00:16:21.291 "num_base_bdevs_discovered": 1, 00:16:21.291 "num_base_bdevs_operational": 1, 00:16:21.291 "base_bdevs_list": [ 00:16:21.291 { 00:16:21.291 "name": null, 00:16:21.291 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:21.291 "is_configured": false, 00:16:21.291 "data_offset": 0, 00:16:21.291 "data_size": 7936 00:16:21.291 }, 00:16:21.291 { 00:16:21.291 "name": "BaseBdev2", 00:16:21.291 "uuid": "3fcbf2c4-8110-5a35-aee1-4047ea38eab5", 00:16:21.291 "is_configured": true, 00:16:21.291 "data_offset": 256, 
00:16:21.291 "data_size": 7936 00:16:21.291 } 00:16:21.291 ] 00:16:21.291 }' 00:16:21.291 05:05:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:21.291 05:05:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:21.291 05:05:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:21.291 05:05:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:21.291 05:05:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:16:21.291 05:05:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.291 05:05:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:21.291 05:05:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.291 05:05:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:21.291 05:05:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.291 05:05:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:21.291 [2024-12-14 05:05:32.160703] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:21.291 [2024-12-14 05:05:32.160796] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:21.291 [2024-12-14 05:05:32.160821] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:16:21.291 [2024-12-14 05:05:32.160835] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:21.291 [2024-12-14 05:05:32.161036] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:21.291 [2024-12-14 05:05:32.161054] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:21.291 [2024-12-14 05:05:32.161114] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:16:21.291 [2024-12-14 05:05:32.161148] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:21.291 [2024-12-14 05:05:32.161158] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:21.291 [2024-12-14 05:05:32.161195] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:16:21.291 BaseBdev1 00:16:21.291 05:05:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.291 05:05:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@775 -- # sleep 1 00:16:22.672 05:05:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:22.672 05:05:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:22.672 05:05:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:22.672 05:05:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:22.672 05:05:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:22.672 05:05:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:22.672 05:05:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:22.672 05:05:33 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:22.672 05:05:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:22.672 05:05:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:22.672 05:05:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:22.672 05:05:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:22.672 05:05:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.672 05:05:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:22.672 05:05:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.672 05:05:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:22.672 "name": "raid_bdev1", 00:16:22.672 "uuid": "f06b18c2-052c-4c1d-89c6-7fc4ec34b846", 00:16:22.672 "strip_size_kb": 0, 00:16:22.672 "state": "online", 00:16:22.672 "raid_level": "raid1", 00:16:22.672 "superblock": true, 00:16:22.672 "num_base_bdevs": 2, 00:16:22.672 "num_base_bdevs_discovered": 1, 00:16:22.672 "num_base_bdevs_operational": 1, 00:16:22.672 "base_bdevs_list": [ 00:16:22.672 { 00:16:22.672 "name": null, 00:16:22.672 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:22.672 "is_configured": false, 00:16:22.672 "data_offset": 0, 00:16:22.672 "data_size": 7936 00:16:22.672 }, 00:16:22.672 { 00:16:22.672 "name": "BaseBdev2", 00:16:22.672 "uuid": "3fcbf2c4-8110-5a35-aee1-4047ea38eab5", 00:16:22.672 "is_configured": true, 00:16:22.672 "data_offset": 256, 00:16:22.672 "data_size": 7936 00:16:22.672 } 00:16:22.672 ] 00:16:22.672 }' 00:16:22.672 05:05:33 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:22.672 05:05:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:22.672 05:05:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:22.672 05:05:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:22.932 05:05:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:22.932 05:05:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:22.932 05:05:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:22.932 05:05:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:22.932 05:05:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.932 05:05:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:22.932 05:05:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:22.932 05:05:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.932 05:05:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:22.932 "name": "raid_bdev1", 00:16:22.932 "uuid": "f06b18c2-052c-4c1d-89c6-7fc4ec34b846", 00:16:22.932 "strip_size_kb": 0, 00:16:22.932 "state": "online", 00:16:22.932 "raid_level": "raid1", 00:16:22.932 "superblock": true, 00:16:22.932 "num_base_bdevs": 2, 00:16:22.932 "num_base_bdevs_discovered": 1, 00:16:22.932 "num_base_bdevs_operational": 1, 00:16:22.932 "base_bdevs_list": [ 00:16:22.932 { 00:16:22.932 "name": 
null, 00:16:22.932 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:22.932 "is_configured": false, 00:16:22.932 "data_offset": 0, 00:16:22.932 "data_size": 7936 00:16:22.932 }, 00:16:22.932 { 00:16:22.932 "name": "BaseBdev2", 00:16:22.932 "uuid": "3fcbf2c4-8110-5a35-aee1-4047ea38eab5", 00:16:22.932 "is_configured": true, 00:16:22.932 "data_offset": 256, 00:16:22.932 "data_size": 7936 00:16:22.932 } 00:16:22.932 ] 00:16:22.932 }' 00:16:22.932 05:05:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:22.932 05:05:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:22.932 05:05:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:22.932 05:05:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:22.932 05:05:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:22.932 05:05:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@650 -- # local es=0 00:16:22.932 05:05:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:22.932 05:05:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:16:22.932 05:05:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:22.932 05:05:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:16:22.932 05:05:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:22.932 05:05:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:22.932 05:05:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.932 05:05:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:22.932 [2024-12-14 05:05:33.714233] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:22.932 [2024-12-14 05:05:33.714502] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:22.932 [2024-12-14 05:05:33.714568] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:22.932 request: 00:16:22.932 { 00:16:22.932 "base_bdev": "BaseBdev1", 00:16:22.932 "raid_bdev": "raid_bdev1", 00:16:22.932 "method": "bdev_raid_add_base_bdev", 00:16:22.932 "req_id": 1 00:16:22.932 } 00:16:22.932 Got JSON-RPC error response 00:16:22.932 response: 00:16:22.932 { 00:16:22.932 "code": -22, 00:16:22.932 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:16:22.932 } 00:16:22.932 05:05:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:16:22.932 05:05:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@653 -- # es=1 00:16:22.932 05:05:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:22.932 05:05:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:22.932 05:05:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:22.932 05:05:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@779 -- # sleep 1 00:16:23.872 05:05:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:16:23.872 05:05:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:23.872 05:05:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:23.872 05:05:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:23.872 05:05:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:23.872 05:05:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:23.872 05:05:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:23.872 05:05:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:23.872 05:05:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:23.872 05:05:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:23.872 05:05:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:23.872 05:05:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:23.872 05:05:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.872 05:05:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:24.132 05:05:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.132 05:05:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:24.132 "name": "raid_bdev1", 00:16:24.132 "uuid": "f06b18c2-052c-4c1d-89c6-7fc4ec34b846", 00:16:24.132 "strip_size_kb": 0, 
00:16:24.132 "state": "online", 00:16:24.132 "raid_level": "raid1", 00:16:24.132 "superblock": true, 00:16:24.132 "num_base_bdevs": 2, 00:16:24.132 "num_base_bdevs_discovered": 1, 00:16:24.132 "num_base_bdevs_operational": 1, 00:16:24.132 "base_bdevs_list": [ 00:16:24.132 { 00:16:24.132 "name": null, 00:16:24.132 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:24.132 "is_configured": false, 00:16:24.132 "data_offset": 0, 00:16:24.132 "data_size": 7936 00:16:24.132 }, 00:16:24.132 { 00:16:24.132 "name": "BaseBdev2", 00:16:24.132 "uuid": "3fcbf2c4-8110-5a35-aee1-4047ea38eab5", 00:16:24.132 "is_configured": true, 00:16:24.132 "data_offset": 256, 00:16:24.132 "data_size": 7936 00:16:24.132 } 00:16:24.132 ] 00:16:24.132 }' 00:16:24.132 05:05:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:24.132 05:05:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:24.392 05:05:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:24.392 05:05:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:24.392 05:05:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:24.392 05:05:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:24.392 05:05:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:24.392 05:05:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:24.392 05:05:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:24.392 05:05:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.392 
05:05:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:24.392 05:05:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.392 05:05:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:24.392 "name": "raid_bdev1", 00:16:24.392 "uuid": "f06b18c2-052c-4c1d-89c6-7fc4ec34b846", 00:16:24.392 "strip_size_kb": 0, 00:16:24.392 "state": "online", 00:16:24.392 "raid_level": "raid1", 00:16:24.392 "superblock": true, 00:16:24.392 "num_base_bdevs": 2, 00:16:24.392 "num_base_bdevs_discovered": 1, 00:16:24.392 "num_base_bdevs_operational": 1, 00:16:24.392 "base_bdevs_list": [ 00:16:24.392 { 00:16:24.392 "name": null, 00:16:24.392 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:24.392 "is_configured": false, 00:16:24.392 "data_offset": 0, 00:16:24.392 "data_size": 7936 00:16:24.392 }, 00:16:24.392 { 00:16:24.392 "name": "BaseBdev2", 00:16:24.392 "uuid": "3fcbf2c4-8110-5a35-aee1-4047ea38eab5", 00:16:24.392 "is_configured": true, 00:16:24.392 "data_offset": 256, 00:16:24.392 "data_size": 7936 00:16:24.392 } 00:16:24.392 ] 00:16:24.392 }' 00:16:24.392 05:05:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:24.392 05:05:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:24.392 05:05:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:24.392 05:05:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:24.392 05:05:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@784 -- # killprocess 99344 00:16:24.393 05:05:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@950 -- # '[' -z 99344 ']' 00:16:24.393 05:05:35 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # kill -0 99344 00:16:24.393 05:05:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@955 -- # uname 00:16:24.753 05:05:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:24.753 05:05:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 99344 00:16:24.753 05:05:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:24.753 05:05:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:24.753 05:05:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@968 -- # echo 'killing process with pid 99344' 00:16:24.753 killing process with pid 99344 00:16:24.753 Received shutdown signal, test time was about 60.000000 seconds 00:16:24.753 00:16:24.753 Latency(us) 00:16:24.753 [2024-12-14T05:05:35.636Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:24.753 [2024-12-14T05:05:35.636Z] =================================================================================================================== 00:16:24.753 [2024-12-14T05:05:35.636Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:24.753 05:05:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@969 -- # kill 99344 00:16:24.753 [2024-12-14 05:05:35.309487] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:24.753 05:05:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@974 -- # wait 99344 00:16:24.753 [2024-12-14 05:05:35.309635] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:24.753 [2024-12-14 05:05:35.309697] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to 
free all in destruct 00:16:24.753 [2024-12-14 05:05:35.309708] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state offline 00:16:24.753 [2024-12-14 05:05:35.372686] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:25.073 05:05:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@786 -- # return 0 00:16:25.073 00:16:25.073 real 0m16.343s 00:16:25.073 user 0m21.543s 00:16:25.073 sys 0m1.819s 00:16:25.073 ************************************ 00:16:25.073 END TEST raid_rebuild_test_sb_md_interleaved 00:16:25.073 ************************************ 00:16:25.073 05:05:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:25.073 05:05:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:25.073 05:05:35 bdev_raid -- bdev/bdev_raid.sh@1015 -- # trap - EXIT 00:16:25.073 05:05:35 bdev_raid -- bdev/bdev_raid.sh@1016 -- # cleanup 00:16:25.073 05:05:35 bdev_raid -- bdev/bdev_raid.sh@56 -- # '[' -n 99344 ']' 00:16:25.073 05:05:35 bdev_raid -- bdev/bdev_raid.sh@56 -- # ps -p 99344 00:16:25.073 05:05:35 bdev_raid -- bdev/bdev_raid.sh@60 -- # rm -rf /raidtest 00:16:25.073 ************************************ 00:16:25.073 END TEST bdev_raid 00:16:25.073 ************************************ 00:16:25.073 00:16:25.073 real 9m56.016s 00:16:25.073 user 14m6.287s 00:16:25.073 sys 1m47.673s 00:16:25.073 05:05:35 bdev_raid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:25.073 05:05:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:25.073 05:05:35 -- spdk/autotest.sh@190 -- # run_test spdkcli_raid /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:16:25.073 05:05:35 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:16:25.073 05:05:35 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:25.073 05:05:35 -- common/autotest_common.sh@10 -- # set +x 00:16:25.073 
************************************ 00:16:25.073 START TEST spdkcli_raid 00:16:25.073 ************************************ 00:16:25.073 05:05:35 spdkcli_raid -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:16:25.345 * Looking for test storage... 00:16:25.345 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:16:25.345 05:05:36 spdkcli_raid -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:16:25.345 05:05:36 spdkcli_raid -- common/autotest_common.sh@1681 -- # lcov --version 00:16:25.345 05:05:36 spdkcli_raid -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:16:25.345 05:05:36 spdkcli_raid -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:16:25.345 05:05:36 spdkcli_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:25.345 05:05:36 spdkcli_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:25.345 05:05:36 spdkcli_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:25.345 05:05:36 spdkcli_raid -- scripts/common.sh@336 -- # IFS=.-: 00:16:25.345 05:05:36 spdkcli_raid -- scripts/common.sh@336 -- # read -ra ver1 00:16:25.345 05:05:36 spdkcli_raid -- scripts/common.sh@337 -- # IFS=.-: 00:16:25.345 05:05:36 spdkcli_raid -- scripts/common.sh@337 -- # read -ra ver2 00:16:25.345 05:05:36 spdkcli_raid -- scripts/common.sh@338 -- # local 'op=<' 00:16:25.345 05:05:36 spdkcli_raid -- scripts/common.sh@340 -- # ver1_l=2 00:16:25.345 05:05:36 spdkcli_raid -- scripts/common.sh@341 -- # ver2_l=1 00:16:25.345 05:05:36 spdkcli_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:25.345 05:05:36 spdkcli_raid -- scripts/common.sh@344 -- # case "$op" in 00:16:25.345 05:05:36 spdkcli_raid -- scripts/common.sh@345 -- # : 1 00:16:25.345 05:05:36 spdkcli_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:25.345 05:05:36 spdkcli_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:25.345 05:05:36 spdkcli_raid -- scripts/common.sh@365 -- # decimal 1 00:16:25.345 05:05:36 spdkcli_raid -- scripts/common.sh@353 -- # local d=1 00:16:25.345 05:05:36 spdkcli_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:25.345 05:05:36 spdkcli_raid -- scripts/common.sh@355 -- # echo 1 00:16:25.345 05:05:36 spdkcli_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:16:25.345 05:05:36 spdkcli_raid -- scripts/common.sh@366 -- # decimal 2 00:16:25.345 05:05:36 spdkcli_raid -- scripts/common.sh@353 -- # local d=2 00:16:25.345 05:05:36 spdkcli_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:25.345 05:05:36 spdkcli_raid -- scripts/common.sh@355 -- # echo 2 00:16:25.345 05:05:36 spdkcli_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:16:25.345 05:05:36 spdkcli_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:25.345 05:05:36 spdkcli_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:25.345 05:05:36 spdkcli_raid -- scripts/common.sh@368 -- # return 0 00:16:25.345 05:05:36 spdkcli_raid -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:25.345 05:05:36 spdkcli_raid -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:16:25.345 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:25.345 --rc genhtml_branch_coverage=1 00:16:25.345 --rc genhtml_function_coverage=1 00:16:25.345 --rc genhtml_legend=1 00:16:25.345 --rc geninfo_all_blocks=1 00:16:25.345 --rc geninfo_unexecuted_blocks=1 00:16:25.345 00:16:25.345 ' 00:16:25.345 05:05:36 spdkcli_raid -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:16:25.345 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:25.345 --rc genhtml_branch_coverage=1 00:16:25.345 --rc genhtml_function_coverage=1 00:16:25.345 --rc genhtml_legend=1 00:16:25.345 --rc geninfo_all_blocks=1 00:16:25.345 --rc geninfo_unexecuted_blocks=1 00:16:25.345 00:16:25.345 ' 00:16:25.345 
05:05:36 spdkcli_raid -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:16:25.345 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:25.345 --rc genhtml_branch_coverage=1 00:16:25.345 --rc genhtml_function_coverage=1 00:16:25.345 --rc genhtml_legend=1 00:16:25.345 --rc geninfo_all_blocks=1 00:16:25.345 --rc geninfo_unexecuted_blocks=1 00:16:25.345 00:16:25.345 ' 00:16:25.345 05:05:36 spdkcli_raid -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:16:25.345 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:25.345 --rc genhtml_branch_coverage=1 00:16:25.345 --rc genhtml_function_coverage=1 00:16:25.345 --rc genhtml_legend=1 00:16:25.345 --rc geninfo_all_blocks=1 00:16:25.345 --rc geninfo_unexecuted_blocks=1 00:16:25.345 00:16:25.345 ' 00:16:25.345 05:05:36 spdkcli_raid -- spdkcli/raid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:16:25.345 05:05:36 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:16:25.345 05:05:36 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:16:25.345 05:05:36 spdkcli_raid -- spdkcli/raid.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:16:25.345 05:05:36 spdkcli_raid -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:16:25.345 05:05:36 spdkcli_raid -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:16:25.345 05:05:36 spdkcli_raid -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:16:25.345 05:05:36 spdkcli_raid -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:16:25.345 05:05:36 spdkcli_raid -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:16:25.345 05:05:36 spdkcli_raid -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:16:25.345 05:05:36 spdkcli_raid -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 
00:16:25.345 05:05:36 spdkcli_raid -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:16:25.345 05:05:36 spdkcli_raid -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:16:25.345 05:05:36 spdkcli_raid -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:16:25.345 05:05:36 spdkcli_raid -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:16:25.345 05:05:36 spdkcli_raid -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:16:25.345 05:05:36 spdkcli_raid -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:16:25.345 05:05:36 spdkcli_raid -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:16:25.345 05:05:36 spdkcli_raid -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:16:25.345 05:05:36 spdkcli_raid -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:16:25.345 05:05:36 spdkcli_raid -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:16:25.345 05:05:36 spdkcli_raid -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:16:25.345 05:05:36 spdkcli_raid -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:16:25.345 05:05:36 spdkcli_raid -- spdkcli/raid.sh@12 -- # MATCH_FILE=spdkcli_raid.test 00:16:25.345 05:05:36 spdkcli_raid -- spdkcli/raid.sh@13 -- # SPDKCLI_BRANCH=/bdevs 00:16:25.345 05:05:36 spdkcli_raid -- spdkcli/raid.sh@14 -- # dirname /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:16:25.345 05:05:36 spdkcli_raid -- spdkcli/raid.sh@14 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/spdkcli 00:16:25.345 05:05:36 spdkcli_raid -- spdkcli/raid.sh@14 -- # testdir=/home/vagrant/spdk_repo/spdk/test/spdkcli 00:16:25.345 05:05:36 spdkcli_raid -- spdkcli/raid.sh@15 -- # . 
/home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:16:25.345 05:05:36 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:16:25.345 05:05:36 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:16:25.345 05:05:36 spdkcli_raid -- spdkcli/raid.sh@17 -- # trap cleanup EXIT 00:16:25.345 05:05:36 spdkcli_raid -- spdkcli/raid.sh@19 -- # timing_enter run_spdk_tgt 00:16:25.345 05:05:36 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:25.345 05:05:36 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:16:25.345 05:05:36 spdkcli_raid -- spdkcli/raid.sh@20 -- # run_spdk_tgt 00:16:25.345 05:05:36 spdkcli_raid -- spdkcli/common.sh@27 -- # spdk_tgt_pid=100014 00:16:25.345 05:05:36 spdkcli_raid -- spdkcli/common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:16:25.346 05:05:36 spdkcli_raid -- spdkcli/common.sh@28 -- # waitforlisten 100014 00:16:25.346 05:05:36 spdkcli_raid -- common/autotest_common.sh@831 -- # '[' -z 100014 ']' 00:16:25.346 05:05:36 spdkcli_raid -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:25.346 05:05:36 spdkcli_raid -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:25.346 05:05:36 spdkcli_raid -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:25.346 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:25.346 05:05:36 spdkcli_raid -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:25.346 05:05:36 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:16:25.606 [2024-12-14 05:05:36.290871] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:16:25.606 [2024-12-14 05:05:36.291127] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100014 ] 00:16:25.606 [2024-12-14 05:05:36.458277] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:25.866 [2024-12-14 05:05:36.535378] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:25.866 [2024-12-14 05:05:36.535487] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:16:26.435 05:05:37 spdkcli_raid -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:26.435 05:05:37 spdkcli_raid -- common/autotest_common.sh@864 -- # return 0 00:16:26.435 05:05:37 spdkcli_raid -- spdkcli/raid.sh@21 -- # timing_exit run_spdk_tgt 00:16:26.435 05:05:37 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:26.435 05:05:37 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:16:26.435 05:05:37 spdkcli_raid -- spdkcli/raid.sh@23 -- # timing_enter spdkcli_create_malloc 00:16:26.435 05:05:37 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:26.435 05:05:37 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:16:26.435 05:05:37 spdkcli_raid -- spdkcli/raid.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 8 512 Malloc1'\'' '\''Malloc1'\'' True 00:16:26.435 '\''/bdevs/malloc create 8 512 Malloc2'\'' '\''Malloc2'\'' True 00:16:26.435 ' 00:16:27.816 Executing command: ['/bdevs/malloc create 8 512 Malloc1', 'Malloc1', True] 00:16:27.816 Executing command: ['/bdevs/malloc create 8 512 Malloc2', 'Malloc2', True] 00:16:28.075 05:05:38 spdkcli_raid -- spdkcli/raid.sh@27 -- # timing_exit spdkcli_create_malloc 00:16:28.075 05:05:38 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:28.075 05:05:38 spdkcli_raid -- 
common/autotest_common.sh@10 -- # set +x 00:16:28.075 05:05:38 spdkcli_raid -- spdkcli/raid.sh@29 -- # timing_enter spdkcli_create_raid 00:16:28.075 05:05:38 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:28.075 05:05:38 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:16:28.075 05:05:38 spdkcli_raid -- spdkcli/raid.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4'\'' '\''testraid'\'' True 00:16:28.075 ' 00:16:29.014 Executing command: ['/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4', 'testraid', True] 00:16:29.274 05:05:39 spdkcli_raid -- spdkcli/raid.sh@32 -- # timing_exit spdkcli_create_raid 00:16:29.274 05:05:39 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:29.274 05:05:39 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:16:29.274 05:05:40 spdkcli_raid -- spdkcli/raid.sh@34 -- # timing_enter spdkcli_check_match 00:16:29.274 05:05:40 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:29.274 05:05:40 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:16:29.274 05:05:40 spdkcli_raid -- spdkcli/raid.sh@35 -- # check_match 00:16:29.274 05:05:40 spdkcli_raid -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /bdevs 00:16:29.843 05:05:40 spdkcli_raid -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test.match 00:16:29.843 05:05:40 spdkcli_raid -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test 00:16:29.843 05:05:40 spdkcli_raid -- spdkcli/raid.sh@36 -- # timing_exit spdkcli_check_match 00:16:29.843 05:05:40 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:29.843 05:05:40 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:16:29.843 05:05:40 spdkcli_raid -- 
spdkcli/raid.sh@38 -- # timing_enter spdkcli_delete_raid 00:16:29.843 05:05:40 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:29.843 05:05:40 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:16:29.844 05:05:40 spdkcli_raid -- spdkcli/raid.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume delete testraid'\'' '\'''\'' True 00:16:29.844 ' 00:16:30.783 Executing command: ['/bdevs/raid_volume delete testraid', '', True] 00:16:31.042 05:05:41 spdkcli_raid -- spdkcli/raid.sh@41 -- # timing_exit spdkcli_delete_raid 00:16:31.042 05:05:41 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:31.042 05:05:41 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:16:31.042 05:05:41 spdkcli_raid -- spdkcli/raid.sh@43 -- # timing_enter spdkcli_delete_malloc 00:16:31.042 05:05:41 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:31.042 05:05:41 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:16:31.042 05:05:41 spdkcli_raid -- spdkcli/raid.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc delete Malloc1'\'' '\'''\'' True 00:16:31.042 '\''/bdevs/malloc delete Malloc2'\'' '\'''\'' True 00:16:31.042 ' 00:16:32.422 Executing command: ['/bdevs/malloc delete Malloc1', '', True] 00:16:32.422 Executing command: ['/bdevs/malloc delete Malloc2', '', True] 00:16:32.422 05:05:43 spdkcli_raid -- spdkcli/raid.sh@47 -- # timing_exit spdkcli_delete_malloc 00:16:32.422 05:05:43 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:32.422 05:05:43 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:16:32.422 05:05:43 spdkcli_raid -- spdkcli/raid.sh@49 -- # killprocess 100014 00:16:32.422 05:05:43 spdkcli_raid -- common/autotest_common.sh@950 -- # '[' -z 100014 ']' 00:16:32.422 05:05:43 spdkcli_raid -- common/autotest_common.sh@954 -- # kill -0 100014 00:16:32.422 05:05:43 spdkcli_raid -- 
common/autotest_common.sh@955 -- # uname 00:16:32.422 05:05:43 spdkcli_raid -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:32.422 05:05:43 spdkcli_raid -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 100014 00:16:32.422 killing process with pid 100014 00:16:32.422 05:05:43 spdkcli_raid -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:32.422 05:05:43 spdkcli_raid -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:32.422 05:05:43 spdkcli_raid -- common/autotest_common.sh@968 -- # echo 'killing process with pid 100014' 00:16:32.422 05:05:43 spdkcli_raid -- common/autotest_common.sh@969 -- # kill 100014 00:16:32.422 05:05:43 spdkcli_raid -- common/autotest_common.sh@974 -- # wait 100014 00:16:33.362 05:05:43 spdkcli_raid -- spdkcli/raid.sh@1 -- # cleanup 00:16:33.362 05:05:43 spdkcli_raid -- spdkcli/common.sh@10 -- # '[' -n 100014 ']' 00:16:33.362 05:05:43 spdkcli_raid -- spdkcli/common.sh@11 -- # killprocess 100014 00:16:33.362 05:05:43 spdkcli_raid -- common/autotest_common.sh@950 -- # '[' -z 100014 ']' 00:16:33.362 05:05:43 spdkcli_raid -- common/autotest_common.sh@954 -- # kill -0 100014 00:16:33.362 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (100014) - No such process 00:16:33.362 Process with pid 100014 is not found 00:16:33.362 05:05:43 spdkcli_raid -- common/autotest_common.sh@977 -- # echo 'Process with pid 100014 is not found' 00:16:33.362 05:05:43 spdkcli_raid -- spdkcli/common.sh@13 -- # '[' -n '' ']' 00:16:33.362 05:05:43 spdkcli_raid -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:16:33.362 05:05:43 spdkcli_raid -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:16:33.362 05:05:43 spdkcli_raid -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_raid.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:16:33.362 00:16:33.362 real 0m8.009s 00:16:33.362 user 0m16.520s 
00:16:33.362 sys 0m1.308s 00:16:33.362 05:05:43 spdkcli_raid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:33.362 ************************************ 00:16:33.362 END TEST spdkcli_raid 00:16:33.362 ************************************ 00:16:33.362 05:05:43 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:16:33.362 05:05:43 -- spdk/autotest.sh@191 -- # run_test blockdev_raid5f /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:16:33.362 05:05:43 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:33.362 05:05:43 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:33.362 05:05:43 -- common/autotest_common.sh@10 -- # set +x 00:16:33.362 ************************************ 00:16:33.362 START TEST blockdev_raid5f 00:16:33.362 ************************************ 00:16:33.362 05:05:43 blockdev_raid5f -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:16:33.362 * Looking for test storage... 00:16:33.362 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:16:33.362 05:05:44 blockdev_raid5f -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:16:33.362 05:05:44 blockdev_raid5f -- common/autotest_common.sh@1681 -- # lcov --version 00:16:33.362 05:05:44 blockdev_raid5f -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:16:33.362 05:05:44 blockdev_raid5f -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:16:33.362 05:05:44 blockdev_raid5f -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:33.362 05:05:44 blockdev_raid5f -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:33.362 05:05:44 blockdev_raid5f -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:33.362 05:05:44 blockdev_raid5f -- scripts/common.sh@336 -- # IFS=.-: 00:16:33.362 05:05:44 blockdev_raid5f -- scripts/common.sh@336 -- # read -ra ver1 00:16:33.362 05:05:44 blockdev_raid5f -- scripts/common.sh@337 -- # IFS=.-: 00:16:33.362 05:05:44 blockdev_raid5f -- 
scripts/common.sh@337 -- # read -ra ver2 00:16:33.362 05:05:44 blockdev_raid5f -- scripts/common.sh@338 -- # local 'op=<' 00:16:33.362 05:05:44 blockdev_raid5f -- scripts/common.sh@340 -- # ver1_l=2 00:16:33.362 05:05:44 blockdev_raid5f -- scripts/common.sh@341 -- # ver2_l=1 00:16:33.362 05:05:44 blockdev_raid5f -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:33.362 05:05:44 blockdev_raid5f -- scripts/common.sh@344 -- # case "$op" in 00:16:33.362 05:05:44 blockdev_raid5f -- scripts/common.sh@345 -- # : 1 00:16:33.362 05:05:44 blockdev_raid5f -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:33.362 05:05:44 blockdev_raid5f -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:33.362 05:05:44 blockdev_raid5f -- scripts/common.sh@365 -- # decimal 1 00:16:33.362 05:05:44 blockdev_raid5f -- scripts/common.sh@353 -- # local d=1 00:16:33.362 05:05:44 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:33.362 05:05:44 blockdev_raid5f -- scripts/common.sh@355 -- # echo 1 00:16:33.362 05:05:44 blockdev_raid5f -- scripts/common.sh@365 -- # ver1[v]=1 00:16:33.362 05:05:44 blockdev_raid5f -- scripts/common.sh@366 -- # decimal 2 00:16:33.362 05:05:44 blockdev_raid5f -- scripts/common.sh@353 -- # local d=2 00:16:33.362 05:05:44 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:33.362 05:05:44 blockdev_raid5f -- scripts/common.sh@355 -- # echo 2 00:16:33.362 05:05:44 blockdev_raid5f -- scripts/common.sh@366 -- # ver2[v]=2 00:16:33.362 05:05:44 blockdev_raid5f -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:33.362 05:05:44 blockdev_raid5f -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:33.362 05:05:44 blockdev_raid5f -- scripts/common.sh@368 -- # return 0 00:16:33.362 05:05:44 blockdev_raid5f -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:33.362 05:05:44 blockdev_raid5f -- common/autotest_common.sh@1694 -- # 
export 'LCOV_OPTS= 00:16:33.362 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:33.362 --rc genhtml_branch_coverage=1 00:16:33.362 --rc genhtml_function_coverage=1 00:16:33.362 --rc genhtml_legend=1 00:16:33.362 --rc geninfo_all_blocks=1 00:16:33.362 --rc geninfo_unexecuted_blocks=1 00:16:33.362 00:16:33.362 ' 00:16:33.362 05:05:44 blockdev_raid5f -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:16:33.362 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:33.362 --rc genhtml_branch_coverage=1 00:16:33.362 --rc genhtml_function_coverage=1 00:16:33.362 --rc genhtml_legend=1 00:16:33.362 --rc geninfo_all_blocks=1 00:16:33.362 --rc geninfo_unexecuted_blocks=1 00:16:33.362 00:16:33.362 ' 00:16:33.362 05:05:44 blockdev_raid5f -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:16:33.362 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:33.362 --rc genhtml_branch_coverage=1 00:16:33.362 --rc genhtml_function_coverage=1 00:16:33.362 --rc genhtml_legend=1 00:16:33.362 --rc geninfo_all_blocks=1 00:16:33.362 --rc geninfo_unexecuted_blocks=1 00:16:33.362 00:16:33.362 ' 00:16:33.362 05:05:44 blockdev_raid5f -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:16:33.363 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:33.363 --rc genhtml_branch_coverage=1 00:16:33.363 --rc genhtml_function_coverage=1 00:16:33.363 --rc genhtml_legend=1 00:16:33.363 --rc geninfo_all_blocks=1 00:16:33.363 --rc geninfo_unexecuted_blocks=1 00:16:33.363 00:16:33.363 ' 00:16:33.363 05:05:44 blockdev_raid5f -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:16:33.363 05:05:44 blockdev_raid5f -- bdev/nbd_common.sh@6 -- # set -e 00:16:33.363 05:05:44 blockdev_raid5f -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:16:33.363 05:05:44 blockdev_raid5f -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:16:33.363 05:05:44 blockdev_raid5f -- bdev/blockdev.sh@14 -- # 
nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:16:33.363 05:05:44 blockdev_raid5f -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:16:33.363 05:05:44 blockdev_raid5f -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:16:33.363 05:05:44 blockdev_raid5f -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:16:33.363 05:05:44 blockdev_raid5f -- bdev/blockdev.sh@20 -- # : 00:16:33.363 05:05:44 blockdev_raid5f -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:16:33.363 05:05:44 blockdev_raid5f -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:16:33.363 05:05:44 blockdev_raid5f -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:16:33.623 05:05:44 blockdev_raid5f -- bdev/blockdev.sh@673 -- # uname -s 00:16:33.623 05:05:44 blockdev_raid5f -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:16:33.623 05:05:44 blockdev_raid5f -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:16:33.623 05:05:44 blockdev_raid5f -- bdev/blockdev.sh@681 -- # test_type=raid5f 00:16:33.623 05:05:44 blockdev_raid5f -- bdev/blockdev.sh@682 -- # crypto_device= 00:16:33.623 05:05:44 blockdev_raid5f -- bdev/blockdev.sh@683 -- # dek= 00:16:33.623 05:05:44 blockdev_raid5f -- bdev/blockdev.sh@684 -- # env_ctx= 00:16:33.623 05:05:44 blockdev_raid5f -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:16:33.623 05:05:44 blockdev_raid5f -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:16:33.623 05:05:44 blockdev_raid5f -- bdev/blockdev.sh@689 -- # [[ raid5f == bdev ]] 00:16:33.623 05:05:44 blockdev_raid5f -- bdev/blockdev.sh@689 -- # [[ raid5f == crypto_* ]] 00:16:33.623 05:05:44 blockdev_raid5f -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:16:33.623 05:05:44 blockdev_raid5f -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=100276 00:16:33.623 05:05:44 blockdev_raid5f -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:16:33.623 05:05:44 blockdev_raid5f -- bdev/blockdev.sh@48 -- # trap 'killprocess 
"$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:16:33.623 05:05:44 blockdev_raid5f -- bdev/blockdev.sh@49 -- # waitforlisten 100276 00:16:33.623 05:05:44 blockdev_raid5f -- common/autotest_common.sh@831 -- # '[' -z 100276 ']' 00:16:33.623 05:05:44 blockdev_raid5f -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:33.623 05:05:44 blockdev_raid5f -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:33.623 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:33.623 05:05:44 blockdev_raid5f -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:33.623 05:05:44 blockdev_raid5f -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:33.624 05:05:44 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:16:33.624 [2024-12-14 05:05:44.347812] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:16:33.624 [2024-12-14 05:05:44.347926] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100276 ] 00:16:33.883 [2024-12-14 05:05:44.508977] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:33.883 [2024-12-14 05:05:44.579754] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:34.452 05:05:45 blockdev_raid5f -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:34.452 05:05:45 blockdev_raid5f -- common/autotest_common.sh@864 -- # return 0 00:16:34.452 05:05:45 blockdev_raid5f -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:16:34.452 05:05:45 blockdev_raid5f -- bdev/blockdev.sh@725 -- # setup_raid5f_conf 00:16:34.452 05:05:45 blockdev_raid5f -- bdev/blockdev.sh@279 -- # rpc_cmd 00:16:34.452 05:05:45 blockdev_raid5f -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.452 05:05:45 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:16:34.452 Malloc0 00:16:34.452 Malloc1 00:16:34.452 Malloc2 00:16:34.452 05:05:45 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.452 05:05:45 blockdev_raid5f -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:16:34.452 05:05:45 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.452 05:05:45 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:16:34.452 05:05:45 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.452 05:05:45 blockdev_raid5f -- bdev/blockdev.sh@739 -- # cat 00:16:34.452 05:05:45 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:16:34.452 05:05:45 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.452 05:05:45 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:16:34.452 05:05:45 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.452 05:05:45 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:16:34.452 05:05:45 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.452 05:05:45 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:16:34.452 05:05:45 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.452 05:05:45 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:16:34.452 05:05:45 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.452 05:05:45 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:16:34.452 05:05:45 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.452 05:05:45 blockdev_raid5f -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:16:34.452 05:05:45 blockdev_raid5f -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 
00:16:34.452 05:05:45 blockdev_raid5f -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:16:34.452 05:05:45 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.452 05:05:45 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:16:34.711 05:05:45 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.711 05:05:45 blockdev_raid5f -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:16:34.711 05:05:45 blockdev_raid5f -- bdev/blockdev.sh@748 -- # jq -r .name 00:16:34.711 05:05:45 blockdev_raid5f -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "f4bafbe0-78c5-42b3-a898-7ddec7b68785"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "f4bafbe0-78c5-42b3-a898-7ddec7b68785",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "f4bafbe0-78c5-42b3-a898-7ddec7b68785",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "cdab747a-b172-42e5-aee1-380ddff553cd",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": 
"a2bccec7-76f8-46f9-814d-53ae490d59d8",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "bce42e7e-439c-4725-bd7d-dcaf9098d953",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:16:34.711 05:05:45 blockdev_raid5f -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:16:34.711 05:05:45 blockdev_raid5f -- bdev/blockdev.sh@751 -- # hello_world_bdev=raid5f 00:16:34.711 05:05:45 blockdev_raid5f -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:16:34.711 05:05:45 blockdev_raid5f -- bdev/blockdev.sh@753 -- # killprocess 100276 00:16:34.711 05:05:45 blockdev_raid5f -- common/autotest_common.sh@950 -- # '[' -z 100276 ']' 00:16:34.711 05:05:45 blockdev_raid5f -- common/autotest_common.sh@954 -- # kill -0 100276 00:16:34.711 05:05:45 blockdev_raid5f -- common/autotest_common.sh@955 -- # uname 00:16:34.711 05:05:45 blockdev_raid5f -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:34.711 05:05:45 blockdev_raid5f -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 100276 00:16:34.711 killing process with pid 100276 00:16:34.711 05:05:45 blockdev_raid5f -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:34.711 05:05:45 blockdev_raid5f -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:34.711 05:05:45 blockdev_raid5f -- common/autotest_common.sh@968 -- # echo 'killing process with pid 100276' 00:16:34.711 05:05:45 blockdev_raid5f -- common/autotest_common.sh@969 -- # kill 100276 00:16:34.711 05:05:45 blockdev_raid5f -- common/autotest_common.sh@974 -- # wait 100276 00:16:35.280 05:05:46 blockdev_raid5f -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:16:35.280 05:05:46 blockdev_raid5f -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:16:35.280 
05:05:46 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:16:35.280 05:05:46 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:35.280 05:05:46 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:16:35.539 ************************************ 00:16:35.539 START TEST bdev_hello_world 00:16:35.539 ************************************ 00:16:35.539 05:05:46 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:16:35.539 [2024-12-14 05:05:46.252490] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:16:35.539 [2024-12-14 05:05:46.252626] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100321 ] 00:16:35.539 [2024-12-14 05:05:46.417325] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:35.799 [2024-12-14 05:05:46.491859] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:36.059 [2024-12-14 05:05:46.746329] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:16:36.059 [2024-12-14 05:05:46.746382] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev raid5f 00:16:36.059 [2024-12-14 05:05:46.746400] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:16:36.059 [2024-12-14 05:05:46.746742] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:16:36.059 [2024-12-14 05:05:46.746902] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:16:36.059 [2024-12-14 05:05:46.746920] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:16:36.059 [2024-12-14 05:05:46.746986] hello_bdev.c: 65:read_complete: *NOTICE*: Read 
string from bdev : Hello World! 00:16:36.059 00:16:36.059 [2024-12-14 05:05:46.747005] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:16:36.319 00:16:36.319 real 0m1.001s 00:16:36.319 user 0m0.559s 00:16:36.319 sys 0m0.326s 00:16:36.319 ************************************ 00:16:36.319 END TEST bdev_hello_world 00:16:36.319 ************************************ 00:16:36.319 05:05:47 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:36.319 05:05:47 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:16:36.579 05:05:47 blockdev_raid5f -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:16:36.579 05:05:47 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:36.579 05:05:47 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:36.579 05:05:47 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:16:36.579 ************************************ 00:16:36.579 START TEST bdev_bounds 00:16:36.579 ************************************ 00:16:36.579 05:05:47 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:16:36.579 05:05:47 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=100352 00:16:36.579 Process bdevio pid: 100352 00:16:36.579 05:05:47 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:16:36.579 05:05:47 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:16:36.579 05:05:47 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 100352' 00:16:36.579 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:16:36.579 05:05:47 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 100352 00:16:36.579 05:05:47 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 100352 ']' 00:16:36.579 05:05:47 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:36.579 05:05:47 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:36.579 05:05:47 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:36.579 05:05:47 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:36.579 05:05:47 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:16:36.579 [2024-12-14 05:05:47.331246] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:16:36.579 [2024-12-14 05:05:47.331364] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100352 ] 00:16:36.839 [2024-12-14 05:05:47.492402] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:36.839 [2024-12-14 05:05:47.566522] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:16:36.839 [2024-12-14 05:05:47.566635] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:36.839 [2024-12-14 05:05:47.566734] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:16:37.408 05:05:48 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:37.408 05:05:48 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:16:37.408 05:05:48 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@293 -- # 
/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:16:37.408 I/O targets: 00:16:37.408 raid5f: 131072 blocks of 512 bytes (64 MiB) 00:16:37.408 00:16:37.408 00:16:37.408 CUnit - A unit testing framework for C - Version 2.1-3 00:16:37.408 http://cunit.sourceforge.net/ 00:16:37.408 00:16:37.408 00:16:37.408 Suite: bdevio tests on: raid5f 00:16:37.408 Test: blockdev write read block ...passed 00:16:37.408 Test: blockdev write zeroes read block ...passed 00:16:37.408 Test: blockdev write zeroes read no split ...passed 00:16:37.668 Test: blockdev write zeroes read split ...passed 00:16:37.668 Test: blockdev write zeroes read split partial ...passed 00:16:37.668 Test: blockdev reset ...passed 00:16:37.668 Test: blockdev write read 8 blocks ...passed 00:16:37.668 Test: blockdev write read size > 128k ...passed 00:16:37.668 Test: blockdev write read invalid size ...passed 00:16:37.668 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:37.668 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:37.668 Test: blockdev write read max offset ...passed 00:16:37.668 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:37.668 Test: blockdev writev readv 8 blocks ...passed 00:16:37.668 Test: blockdev writev readv 30 x 1block ...passed 00:16:37.668 Test: blockdev writev readv block ...passed 00:16:37.668 Test: blockdev writev readv size > 128k ...passed 00:16:37.668 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:37.668 Test: blockdev comparev and writev ...passed 00:16:37.668 Test: blockdev nvme passthru rw ...passed 00:16:37.668 Test: blockdev nvme passthru vendor specific ...passed 00:16:37.668 Test: blockdev nvme admin passthru ...passed 00:16:37.668 Test: blockdev copy ...passed 00:16:37.668 00:16:37.668 Run Summary: Type Total Ran Passed Failed Inactive 00:16:37.668 suites 1 1 n/a 0 0 00:16:37.668 tests 23 23 23 0 0 00:16:37.668 asserts 130 130 130 0 n/a 
00:16:37.668 00:16:37.668 Elapsed time = 0.335 seconds 00:16:37.668 0 00:16:37.668 05:05:48 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 100352 00:16:37.668 05:05:48 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 100352 ']' 00:16:37.668 05:05:48 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 100352 00:16:37.668 05:05:48 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:16:37.668 05:05:48 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:37.668 05:05:48 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 100352 00:16:37.668 05:05:48 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:37.668 05:05:48 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:37.668 killing process with pid 100352 00:16:37.668 05:05:48 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 100352' 00:16:37.668 05:05:48 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@969 -- # kill 100352 00:16:37.668 05:05:48 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@974 -- # wait 100352 00:16:38.238 05:05:48 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:16:38.238 00:16:38.238 real 0m1.632s 00:16:38.238 user 0m3.705s 00:16:38.238 sys 0m0.445s 00:16:38.238 ************************************ 00:16:38.238 END TEST bdev_bounds 00:16:38.238 ************************************ 00:16:38.238 05:05:48 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:38.238 05:05:48 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:16:38.238 05:05:48 blockdev_raid5f -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 
00:16:38.238 05:05:48 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:16:38.238 05:05:48 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:38.238 05:05:48 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:16:38.238 ************************************ 00:16:38.238 START TEST bdev_nbd 00:16:38.238 ************************************ 00:16:38.238 05:05:48 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:16:38.238 05:05:48 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:16:38.238 05:05:48 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:16:38.238 05:05:48 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:38.238 05:05:48 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:16:38.238 05:05:48 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('raid5f') 00:16:38.238 05:05:48 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:16:38.238 05:05:48 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=1 00:16:38.238 05:05:48 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:16:38.238 05:05:48 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:16:38.238 05:05:48 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:16:38.238 05:05:48 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=1 00:16:38.238 05:05:48 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0') 00:16:38.238 05:05:48 blockdev_raid5f.bdev_nbd -- 
bdev/blockdev.sh@313 -- # local nbd_list 00:16:38.238 05:05:48 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('raid5f') 00:16:38.238 05:05:48 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:16:38.238 05:05:48 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=100400 00:16:38.238 05:05:48 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:16:38.238 05:05:48 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:16:38.238 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:16:38.238 05:05:48 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 100400 /var/tmp/spdk-nbd.sock 00:16:38.239 05:05:48 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 100400 ']' 00:16:38.239 05:05:48 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:16:38.239 05:05:48 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:38.239 05:05:48 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:16:38.239 05:05:48 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:38.239 05:05:48 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:16:38.239 [2024-12-14 05:05:49.043560] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:16:38.239 [2024-12-14 05:05:49.043673] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:38.498 [2024-12-14 05:05:49.205969] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:38.499 [2024-12-14 05:05:49.278234] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:39.068 05:05:49 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:39.068 05:05:49 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:16:39.068 05:05:49 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock raid5f 00:16:39.069 05:05:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:39.069 05:05:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('raid5f') 00:16:39.069 05:05:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:16:39.069 05:05:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock raid5f 00:16:39.069 05:05:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:39.069 05:05:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('raid5f') 00:16:39.069 05:05:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:16:39.069 05:05:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:16:39.069 05:05:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:16:39.069 05:05:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:16:39.069 05:05:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:16:39.069 05:05:49 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f 00:16:39.329 05:05:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:16:39.329 05:05:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:16:39.329 05:05:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:16:39.329 05:05:50 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:16:39.329 05:05:50 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:16:39.329 05:05:50 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:16:39.329 05:05:50 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:16:39.329 05:05:50 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:16:39.329 05:05:50 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:16:39.329 05:05:50 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:16:39.329 05:05:50 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:16:39.329 05:05:50 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:39.329 1+0 records in 00:16:39.329 1+0 records out 00:16:39.329 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00026576 s, 15.4 MB/s 00:16:39.329 05:05:50 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:39.329 05:05:50 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:16:39.329 05:05:50 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:39.329 05:05:50 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 
00:16:39.329 05:05:50 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:16:39.329 05:05:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:16:39.329 05:05:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:16:39.329 05:05:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:16:39.589 05:05:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:16:39.589 { 00:16:39.589 "nbd_device": "/dev/nbd0", 00:16:39.589 "bdev_name": "raid5f" 00:16:39.589 } 00:16:39.589 ]' 00:16:39.589 05:05:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:16:39.589 05:05:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:16:39.589 { 00:16:39.589 "nbd_device": "/dev/nbd0", 00:16:39.589 "bdev_name": "raid5f" 00:16:39.589 } 00:16:39.589 ]' 00:16:39.589 05:05:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:16:39.589 05:05:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:16:39.589 05:05:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:39.589 05:05:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:39.589 05:05:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:39.589 05:05:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:16:39.589 05:05:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:39.589 05:05:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:16:39.849 05:05:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd0 00:16:39.849 05:05:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:39.849 05:05:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:39.849 05:05:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:39.849 05:05:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:39.849 05:05:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:39.849 05:05:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:39.849 05:05:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:39.849 05:05:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:16:39.849 05:05:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:39.849 05:05:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:16:40.109 05:05:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:16:40.109 05:05:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:16:40.109 05:05:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:16:40.109 05:05:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:16:40.109 05:05:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:16:40.109 05:05:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:16:40.109 05:05:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:16:40.109 05:05:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:16:40.109 05:05:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:16:40.109 05:05:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:16:40.109 05:05:50 blockdev_raid5f.bdev_nbd 
-- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:16:40.109 05:05:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:16:40.109 05:05:50 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:16:40.109 05:05:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:40.109 05:05:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('raid5f') 00:16:40.109 05:05:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:16:40.109 05:05:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0') 00:16:40.109 05:05:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:16:40.109 05:05:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:16:40.109 05:05:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:40.109 05:05:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('raid5f') 00:16:40.109 05:05:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:40.109 05:05:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:40.109 05:05:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:40.109 05:05:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:16:40.109 05:05:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:40.109 05:05:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:40.109 05:05:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f /dev/nbd0 00:16:40.368 /dev/nbd0 00:16:40.368 05:05:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:40.368 05:05:51 
blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:40.368 05:05:51 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:16:40.368 05:05:51 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:16:40.368 05:05:51 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:16:40.368 05:05:51 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:16:40.368 05:05:51 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:16:40.368 05:05:51 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:16:40.368 05:05:51 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:16:40.368 05:05:51 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:16:40.368 05:05:51 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:40.368 1+0 records in 00:16:40.368 1+0 records out 00:16:40.368 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000452639 s, 9.0 MB/s 00:16:40.368 05:05:51 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:40.368 05:05:51 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:16:40.368 05:05:51 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:40.368 05:05:51 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:16:40.368 05:05:51 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:16:40.368 05:05:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:40.368 05:05:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:40.368 05:05:51 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:16:40.368 05:05:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:40.368 05:05:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:16:40.627 05:05:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:16:40.627 { 00:16:40.627 "nbd_device": "/dev/nbd0", 00:16:40.627 "bdev_name": "raid5f" 00:16:40.627 } 00:16:40.627 ]' 00:16:40.627 05:05:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:16:40.627 { 00:16:40.627 "nbd_device": "/dev/nbd0", 00:16:40.627 "bdev_name": "raid5f" 00:16:40.627 } 00:16:40.627 ]' 00:16:40.627 05:05:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:16:40.627 05:05:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:16:40.627 05:05:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:16:40.627 05:05:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:16:40.627 05:05:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=1 00:16:40.627 05:05:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 1 00:16:40.627 05:05:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=1 00:16:40.627 05:05:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:16:40.627 05:05:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:16:40.627 05:05:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:16:40.627 05:05:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:16:40.627 05:05:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:16:40.627 05:05:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:16:40.627 05:05:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:16:40.627 05:05:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:16:40.627 256+0 records in 00:16:40.628 256+0 records out 00:16:40.628 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0139612 s, 75.1 MB/s 00:16:40.628 05:05:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:16:40.628 05:05:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:16:40.628 256+0 records in 00:16:40.628 256+0 records out 00:16:40.628 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0281532 s, 37.2 MB/s 00:16:40.628 05:05:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:16:40.628 05:05:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:16:40.628 05:05:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:16:40.628 05:05:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:16:40.628 05:05:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:16:40.628 05:05:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:16:40.628 05:05:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:16:40.628 05:05:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:16:40.628 05:05:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:16:40.628 05:05:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:16:40.628 05:05:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:16:40.628 05:05:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:40.628 05:05:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:40.628 05:05:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:40.628 05:05:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:16:40.628 05:05:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:40.628 05:05:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:16:40.887 05:05:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:40.887 05:05:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:40.887 05:05:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:40.887 05:05:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:40.887 05:05:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:40.887 05:05:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:40.887 05:05:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:40.887 05:05:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:40.888 05:05:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:16:40.888 05:05:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:40.888 05:05:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 
00:16:41.148 05:05:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:16:41.148 05:05:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:16:41.148 05:05:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:16:41.148 05:05:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:16:41.148 05:05:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:16:41.148 05:05:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:16:41.148 05:05:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:16:41.148 05:05:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:16:41.148 05:05:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:16:41.148 05:05:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:16:41.148 05:05:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:16:41.148 05:05:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:16:41.148 05:05:51 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:16:41.148 05:05:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:41.148 05:05:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:16:41.148 05:05:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:16:41.408 malloc_lvol_verify 00:16:41.408 05:05:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:16:41.668 ffa49a5b-870d-478b-9431-95c41efac65f 00:16:41.668 05:05:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@136 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:16:41.668 d77544a5-dda5-4291-b67e-1e86155bdd1c 00:16:41.668 05:05:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:16:41.928 /dev/nbd0 00:16:41.928 05:05:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:16:41.928 05:05:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:16:41.928 05:05:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:16:41.928 05:05:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:16:41.928 05:05:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:16:41.928 mke2fs 1.47.0 (5-Feb-2023) 00:16:41.928 Discarding device blocks: 0/4096 done 00:16:41.928 Creating filesystem with 4096 1k blocks and 1024 inodes 00:16:41.928 00:16:41.928 Allocating group tables: 0/1 done 00:16:41.928 Writing inode tables: 0/1 done 00:16:41.928 Creating journal (1024 blocks): done 00:16:41.928 Writing superblocks and filesystem accounting information: 0/1 done 00:16:41.928 00:16:41.928 05:05:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:16:41.928 05:05:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:41.928 05:05:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:41.928 05:05:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:41.928 05:05:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:16:41.928 05:05:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:41.928 05:05:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:16:42.188 05:05:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:42.188 05:05:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:42.188 05:05:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:42.188 05:05:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:42.188 05:05:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:42.188 05:05:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:42.188 05:05:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:42.188 05:05:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:42.188 05:05:52 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 100400 00:16:42.188 05:05:52 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 100400 ']' 00:16:42.188 05:05:52 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 100400 00:16:42.188 05:05:52 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:16:42.188 05:05:52 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:42.188 05:05:52 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 100400 00:16:42.188 killing process with pid 100400 00:16:42.188 05:05:52 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:42.188 05:05:52 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:42.188 05:05:52 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 100400' 00:16:42.188 05:05:52 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@969 -- # kill 100400 00:16:42.188 05:05:52 blockdev_raid5f.bdev_nbd -- 
common/autotest_common.sh@974 -- # wait 100400 00:16:42.759 ************************************ 00:16:42.759 END TEST bdev_nbd 00:16:42.759 ************************************ 00:16:42.759 05:05:53 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:16:42.759 00:16:42.759 real 0m4.498s 00:16:42.759 user 0m6.322s 00:16:42.759 sys 0m1.364s 00:16:42.759 05:05:53 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:42.759 05:05:53 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:16:42.759 05:05:53 blockdev_raid5f -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:16:42.759 05:05:53 blockdev_raid5f -- bdev/blockdev.sh@763 -- # '[' raid5f = nvme ']' 00:16:42.759 05:05:53 blockdev_raid5f -- bdev/blockdev.sh@763 -- # '[' raid5f = gpt ']' 00:16:42.759 05:05:53 blockdev_raid5f -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:16:42.759 05:05:53 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:42.759 05:05:53 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:42.759 05:05:53 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:16:42.759 ************************************ 00:16:42.759 START TEST bdev_fio 00:16:42.759 ************************************ 00:16:42.759 05:05:53 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1125 -- # fio_test_suite '' 00:16:42.759 05:05:53 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:16:42.759 05:05:53 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:16:42.759 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:16:42.759 05:05:53 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:16:42.759 05:05:53 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:16:42.759 05:05:53 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:16:42.759 05:05:53 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:16:42.759 05:05:53 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:16:42.759 05:05:53 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:42.759 05:05:53 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=verify 00:16:42.759 05:05:53 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type=AIO 00:16:42.759 05:05:53 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:16:42.759 05:05:53 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:16:42.759 05:05:53 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:16:42.759 05:05:53 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z verify ']' 00:16:42.759 05:05:53 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:16:42.759 05:05:53 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:42.759 05:05:53 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:16:42.759 05:05:53 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1313 -- # '[' verify == verify ']' 00:16:42.759 05:05:53 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1314 -- # cat 00:16:42.759 05:05:53 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1323 -- # '[' AIO == AIO ']' 00:16:42.759 05:05:53 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1324 -- # /usr/src/fio/fio --version 00:16:43.019 05:05:53 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1324 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 
00:16:43.019 05:05:53 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1325 -- # echo serialize_overlap=1 00:16:43.019 05:05:53 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:16:43.019 05:05:53 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_raid5f]' 00:16:43.019 05:05:53 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=raid5f 00:16:43.019 05:05:53 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:16:43.019 05:05:53 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:16:43.019 05:05:53 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']' 00:16:43.019 05:05:53 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:43.019 05:05:53 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:16:43.019 ************************************ 00:16:43.019 START TEST bdev_fio_rw_verify 00:16:43.019 ************************************ 00:16:43.019 05:05:53 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1125 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:16:43.019 05:05:53 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:16:43.019 05:05:53 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:16:43.019 05:05:53 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:43.019 05:05:53 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local sanitizers 00:16:43.019 05:05:53 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:43.019 05:05:53 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # shift 00:16:43.019 05:05:53 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local asan_lib= 00:16:43.019 05:05:53 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:16:43.019 05:05:53 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:43.019 05:05:53 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libasan 00:16:43.019 05:05:53 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:16:43.019 05:05:53 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:43.019 05:05:53 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:16:43.019 05:05:53 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- 
common/autotest_common.sh@1347 -- # break 00:16:43.019 05:05:53 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:16:43.019 05:05:53 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:16:43.279 job_raid5f: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:16:43.279 fio-3.35 00:16:43.279 Starting 1 thread 00:16:55.500 00:16:55.500 job_raid5f: (groupid=0, jobs=1): err= 0: pid=100591: Sat Dec 14 05:06:04 2024 00:16:55.500 read: IOPS=12.6k, BW=49.4MiB/s (51.8MB/s)(494MiB/10001msec) 00:16:55.500 slat (nsec): min=16738, max=56164, avg=18420.08, stdev=1943.71 00:16:55.500 clat (usec): min=13, max=320, avg=128.34, stdev=43.79 00:16:55.500 lat (usec): min=31, max=345, avg=146.76, stdev=44.02 00:16:55.500 clat percentiles (usec): 00:16:55.500 | 50.000th=[ 131], 99.000th=[ 212], 99.900th=[ 241], 99.990th=[ 277], 00:16:55.500 | 99.999th=[ 293] 00:16:55.500 write: IOPS=13.2k, BW=51.6MiB/s (54.1MB/s)(510MiB/9878msec); 0 zone resets 00:16:55.500 slat (usec): min=7, max=245, avg=15.98, stdev= 3.59 00:16:55.500 clat (usec): min=59, max=1591, avg=290.86, stdev=42.29 00:16:55.500 lat (usec): min=74, max=1837, avg=306.84, stdev=43.49 00:16:55.500 clat percentiles (usec): 00:16:55.500 | 50.000th=[ 293], 99.000th=[ 367], 99.900th=[ 619], 99.990th=[ 1336], 00:16:55.500 | 99.999th=[ 1516] 00:16:55.500 bw ( KiB/s): min=49968, max=54856, per=98.88%, avg=52274.53, stdev=1239.12, samples=19 00:16:55.500 iops : min=12492, max=13714, avg=13068.63, stdev=309.78, samples=19 00:16:55.500 lat (usec) : 20=0.01%, 50=0.01%, 
100=15.86%, 250=41.17%, 500=42.89% 00:16:55.500 lat (usec) : 750=0.05%, 1000=0.02% 00:16:55.500 lat (msec) : 2=0.01% 00:16:55.500 cpu : usr=98.98%, sys=0.36%, ctx=14, majf=0, minf=13386 00:16:55.500 IO depths : 1=7.6%, 2=19.9%, 4=55.2%, 8=17.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:55.500 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.500 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.500 issued rwts: total=126475,130549,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:55.500 latency : target=0, window=0, percentile=100.00%, depth=8 00:16:55.500 00:16:55.500 Run status group 0 (all jobs): 00:16:55.500 READ: bw=49.4MiB/s (51.8MB/s), 49.4MiB/s-49.4MiB/s (51.8MB/s-51.8MB/s), io=494MiB (518MB), run=10001-10001msec 00:16:55.500 WRITE: bw=51.6MiB/s (54.1MB/s), 51.6MiB/s-51.6MiB/s (54.1MB/s-54.1MB/s), io=510MiB (535MB), run=9878-9878msec 00:16:55.500 ----------------------------------------------------- 00:16:55.500 Suppressions used: 00:16:55.500 count bytes template 00:16:55.500 1 7 /usr/src/fio/parse.c 00:16:55.500 154 14784 /usr/src/fio/iolog.c 00:16:55.500 1 8 libtcmalloc_minimal.so 00:16:55.500 1 904 libcrypto.so 00:16:55.500 ----------------------------------------------------- 00:16:55.500 00:16:55.500 00:16:55.500 real 0m11.419s 00:16:55.500 user 0m11.832s 00:16:55.500 sys 0m0.734s 00:16:55.500 05:06:05 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:55.500 05:06:05 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:16:55.500 ************************************ 00:16:55.500 END TEST bdev_fio_rw_verify 00:16:55.500 ************************************ 00:16:55.500 05:06:05 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:16:55.500 05:06:05 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:55.500 05:06:05 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:16:55.500 05:06:05 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:55.500 05:06:05 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=trim 00:16:55.500 05:06:05 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type= 00:16:55.500 05:06:05 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:16:55.500 05:06:05 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:16:55.500 05:06:05 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:16:55.500 05:06:05 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z trim ']' 00:16:55.500 05:06:05 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:16:55.500 05:06:05 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:55.500 05:06:05 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:16:55.500 05:06:05 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1313 -- # '[' trim == verify ']' 00:16:55.500 05:06:05 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # '[' trim == trim ']' 00:16:55.500 05:06:05 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1329 -- # echo rw=trimwrite 00:16:55.500 05:06:05 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "f4bafbe0-78c5-42b3-a898-7ddec7b68785"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "f4bafbe0-78c5-42b3-a898-7ddec7b68785",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' 
"w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "f4bafbe0-78c5-42b3-a898-7ddec7b68785",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "cdab747a-b172-42e5-aee1-380ddff553cd",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "a2bccec7-76f8-46f9-814d-53ae490d59d8",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "bce42e7e-439c-4725-bd7d-dcaf9098d953",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:16:55.500 05:06:05 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:16:55.500 05:06:05 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:16:55.500 05:06:05 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:55.500 /home/vagrant/spdk_repo/spdk 00:16:55.500 05:06:05 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:16:55.500 05:06:05 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:16:55.500 05:06:05 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 
00:16:55.500 00:16:55.500 real 0m11.715s 00:16:55.500 user 0m11.956s 00:16:55.500 sys 0m0.877s 00:16:55.500 05:06:05 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:55.500 05:06:05 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:16:55.500 ************************************ 00:16:55.500 END TEST bdev_fio 00:16:55.500 ************************************ 00:16:55.500 05:06:05 blockdev_raid5f -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:16:55.500 05:06:05 blockdev_raid5f -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:16:55.500 05:06:05 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:16:55.500 05:06:05 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:55.500 05:06:05 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:16:55.500 ************************************ 00:16:55.500 START TEST bdev_verify 00:16:55.500 ************************************ 00:16:55.500 05:06:05 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:16:55.500 [2024-12-14 05:06:05.410239] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:16:55.500 [2024-12-14 05:06:05.410366] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100746 ] 00:16:55.500 [2024-12-14 05:06:05.576670] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:55.500 [2024-12-14 05:06:05.664151] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:55.500 [2024-12-14 05:06:05.664283] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:16:55.500 Running I/O for 5 seconds... 00:16:57.452 11070.00 IOPS, 43.24 MiB/s [2024-12-14T05:06:09.274Z] 11181.00 IOPS, 43.68 MiB/s [2024-12-14T05:06:10.213Z] 11220.67 IOPS, 43.83 MiB/s [2024-12-14T05:06:11.153Z] 11214.25 IOPS, 43.81 MiB/s [2024-12-14T05:06:11.153Z] 11229.80 IOPS, 43.87 MiB/s 00:17:00.270 Latency(us) 00:17:00.270 [2024-12-14T05:06:11.153Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:00.270 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:17:00.270 Verification LBA range: start 0x0 length 0x2000 00:17:00.270 raid5f : 5.01 6764.83 26.43 0.00 0.00 28431.85 327.32 20376.26 00:17:00.270 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:00.270 Verification LBA range: start 0x2000 length 0x2000 00:17:00.270 raid5f : 5.02 4479.97 17.50 0.00 0.00 42856.34 120.73 30449.91 00:17:00.270 [2024-12-14T05:06:11.153Z] =================================================================================================================== 00:17:00.270 [2024-12-14T05:06:11.153Z] Total : 11244.80 43.92 0.00 0.00 34183.74 120.73 30449.91 00:17:00.530 ************************************ 00:17:00.530 END TEST bdev_verify 00:17:00.530 ************************************ 00:17:00.530 00:17:00.530 real 0m6.027s 00:17:00.530 user 0m11.009s 00:17:00.530 sys 0m0.362s 
00:17:00.530 05:06:11 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:00.530 05:06:11 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:17:00.530 05:06:11 blockdev_raid5f -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:17:00.530 05:06:11 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:17:00.530 05:06:11 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:00.530 05:06:11 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:00.790 ************************************ 00:17:00.790 START TEST bdev_verify_big_io 00:17:00.790 ************************************ 00:17:00.790 05:06:11 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:17:00.790 [2024-12-14 05:06:11.511451] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:17:00.790 [2024-12-14 05:06:11.511651] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100828 ] 00:17:01.049 [2024-12-14 05:06:11.671755] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:01.049 [2024-12-14 05:06:11.747508] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:01.049 [2024-12-14 05:06:11.747593] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:17:01.309 Running I/O for 5 seconds... 
00:17:03.187 633.00 IOPS, 39.56 MiB/s [2024-12-14T05:06:15.451Z] 761.00 IOPS, 47.56 MiB/s [2024-12-14T05:06:16.389Z] 803.00 IOPS, 50.19 MiB/s [2024-12-14T05:06:17.328Z] 808.75 IOPS, 50.55 MiB/s [2024-12-14T05:06:17.328Z] 812.00 IOPS, 50.75 MiB/s 00:17:06.445 Latency(us) 00:17:06.445 [2024-12-14T05:06:17.328Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:06.445 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:17:06.445 Verification LBA range: start 0x0 length 0x200 00:17:06.445 raid5f : 5.13 470.80 29.43 0.00 0.00 6801327.23 275.45 298546.53 00:17:06.445 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:17:06.445 Verification LBA range: start 0x200 length 0x200 00:17:06.445 raid5f : 5.29 359.92 22.49 0.00 0.00 8786186.74 194.96 380967.35 00:17:06.445 [2024-12-14T05:06:17.328Z] =================================================================================================================== 00:17:06.445 [2024-12-14T05:06:17.328Z] Total : 830.72 51.92 0.00 0.00 7676486.56 194.96 380967.35 00:17:07.016 00:17:07.016 real 0m6.279s 00:17:07.016 user 0m11.564s 00:17:07.016 sys 0m0.327s 00:17:07.016 05:06:17 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:07.016 05:06:17 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:17:07.016 ************************************ 00:17:07.016 END TEST bdev_verify_big_io 00:17:07.016 ************************************ 00:17:07.016 05:06:17 blockdev_raid5f -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:07.016 05:06:17 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:17:07.016 05:06:17 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:07.016 05:06:17 
blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:07.016 ************************************ 00:17:07.016 START TEST bdev_write_zeroes 00:17:07.016 ************************************ 00:17:07.016 05:06:17 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:07.016 [2024-12-14 05:06:17.870774] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:17:07.016 [2024-12-14 05:06:17.871014] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100915 ] 00:17:07.276 [2024-12-14 05:06:18.035622] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:07.276 [2024-12-14 05:06:18.110463] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:07.535 Running I/O for 1 seconds... 
00:17:08.916 30231.00 IOPS, 118.09 MiB/s 00:17:08.917 Latency(us) 00:17:08.917 [2024-12-14T05:06:19.800Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:08.917 Job: raid5f (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:17:08.917 raid5f : 1.01 30193.12 117.94 0.00 0.00 4225.44 1438.07 5952.61 00:17:08.917 [2024-12-14T05:06:19.800Z] =================================================================================================================== 00:17:08.917 [2024-12-14T05:06:19.800Z] Total : 30193.12 117.94 0.00 0.00 4225.44 1438.07 5952.61 00:17:08.917 ************************************ 00:17:08.917 END TEST bdev_write_zeroes 00:17:08.917 ************************************ 00:17:08.917 00:17:08.917 real 0m2.013s 00:17:08.917 user 0m1.572s 00:17:08.917 sys 0m0.320s 00:17:08.917 05:06:19 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:08.917 05:06:19 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:17:09.177 05:06:19 blockdev_raid5f -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:09.177 05:06:19 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:17:09.177 05:06:19 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:09.177 05:06:19 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:09.177 ************************************ 00:17:09.177 START TEST bdev_json_nonenclosed 00:17:09.177 ************************************ 00:17:09.177 05:06:19 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:09.177 [2024-12-14 
05:06:19.958104] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:17:09.177 [2024-12-14 05:06:19.958255] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100957 ] 00:17:09.437 [2024-12-14 05:06:20.118952] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:09.437 [2024-12-14 05:06:20.199719] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:09.437 [2024-12-14 05:06:20.199855] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:17:09.437 [2024-12-14 05:06:20.199885] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:17:09.437 [2024-12-14 05:06:20.199898] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:17:09.697 00:17:09.697 real 0m0.487s 00:17:09.697 user 0m0.218s 00:17:09.697 sys 0m0.164s 00:17:09.697 ************************************ 00:17:09.697 END TEST bdev_json_nonenclosed 00:17:09.697 ************************************ 00:17:09.697 05:06:20 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:09.697 05:06:20 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:17:09.697 05:06:20 blockdev_raid5f -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:09.697 05:06:20 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:17:09.697 05:06:20 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:09.697 05:06:20 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:09.697 
************************************ 00:17:09.697 START TEST bdev_json_nonarray 00:17:09.697 ************************************ 00:17:09.697 05:06:20 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:09.697 [2024-12-14 05:06:20.519328] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:17:09.697 [2024-12-14 05:06:20.519530] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100983 ] 00:17:09.957 [2024-12-14 05:06:20.681276] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:09.957 [2024-12-14 05:06:20.754290] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:09.957 [2024-12-14 05:06:20.754435] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:17:09.957 [2024-12-14 05:06:20.754464] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:17:09.957 [2024-12-14 05:06:20.754477] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:17:10.217 00:17:10.217 real 0m0.479s 00:17:10.217 user 0m0.223s 00:17:10.217 sys 0m0.152s 00:17:10.217 ************************************ 00:17:10.217 END TEST bdev_json_nonarray 00:17:10.217 ************************************ 00:17:10.217 05:06:20 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:10.217 05:06:20 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:17:10.217 05:06:20 blockdev_raid5f -- bdev/blockdev.sh@786 -- # [[ raid5f == bdev ]] 00:17:10.217 05:06:20 blockdev_raid5f -- bdev/blockdev.sh@793 -- # [[ raid5f == gpt ]] 00:17:10.217 05:06:20 blockdev_raid5f -- bdev/blockdev.sh@797 -- # [[ raid5f == crypto_sw ]] 00:17:10.217 05:06:20 blockdev_raid5f -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:17:10.217 05:06:20 blockdev_raid5f -- bdev/blockdev.sh@810 -- # cleanup 00:17:10.217 05:06:20 blockdev_raid5f -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:17:10.217 05:06:20 blockdev_raid5f -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:17:10.217 05:06:20 blockdev_raid5f -- bdev/blockdev.sh@26 -- # [[ raid5f == rbd ]] 00:17:10.217 05:06:20 blockdev_raid5f -- bdev/blockdev.sh@30 -- # [[ raid5f == daos ]] 00:17:10.217 05:06:20 blockdev_raid5f -- bdev/blockdev.sh@34 -- # [[ raid5f = \g\p\t ]] 00:17:10.217 05:06:20 blockdev_raid5f -- bdev/blockdev.sh@40 -- # [[ raid5f == xnvme ]] 00:17:10.217 ************************************ 00:17:10.217 00:17:10.217 real 0m36.986s 00:17:10.217 user 0m49.152s 00:17:10.217 sys 0m5.623s 00:17:10.217 05:06:20 blockdev_raid5f -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:10.217 05:06:20 blockdev_raid5f -- 
common/autotest_common.sh@10 -- # set +x 00:17:10.217 END TEST blockdev_raid5f 00:17:10.217 ************************************ 00:17:10.217 05:06:21 -- spdk/autotest.sh@194 -- # uname -s 00:17:10.217 05:06:21 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:17:10.217 05:06:21 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:17:10.217 05:06:21 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:17:10.217 05:06:21 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:17:10.217 05:06:21 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:17:10.217 05:06:21 -- spdk/autotest.sh@256 -- # timing_exit lib 00:17:10.217 05:06:21 -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:10.217 05:06:21 -- common/autotest_common.sh@10 -- # set +x 00:17:10.477 05:06:21 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:17:10.477 05:06:21 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:17:10.477 05:06:21 -- spdk/autotest.sh@272 -- # '[' 0 -eq 1 ']' 00:17:10.477 05:06:21 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:17:10.477 05:06:21 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:17:10.477 05:06:21 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:17:10.477 05:06:21 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:17:10.477 05:06:21 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:17:10.477 05:06:21 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:17:10.477 05:06:21 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:17:10.477 05:06:21 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:17:10.477 05:06:21 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:17:10.477 05:06:21 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:17:10.477 05:06:21 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:17:10.477 05:06:21 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:17:10.477 05:06:21 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:17:10.477 05:06:21 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:17:10.477 05:06:21 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:17:10.477 05:06:21 -- spdk/autotest.sh@381 -- # trap - SIGINT 
SIGTERM EXIT 00:17:10.477 05:06:21 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:17:10.477 05:06:21 -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:10.477 05:06:21 -- common/autotest_common.sh@10 -- # set +x 00:17:10.477 05:06:21 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:17:10.477 05:06:21 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:17:10.477 05:06:21 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:17:10.477 05:06:21 -- common/autotest_common.sh@10 -- # set +x 00:17:13.020 INFO: APP EXITING 00:17:13.020 INFO: killing all VMs 00:17:13.020 INFO: killing vhost app 00:17:13.020 INFO: EXIT DONE 00:17:13.020 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:13.020 Waiting for block devices as requested 00:17:13.284 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:17:13.284 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:17:14.263 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:14.263 Cleaning 00:17:14.263 Removing: /var/run/dpdk/spdk0/config 00:17:14.263 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:17:14.263 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:17:14.263 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:17:14.263 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:17:14.263 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:17:14.263 Removing: /var/run/dpdk/spdk0/hugepage_info 00:17:14.263 Removing: /dev/shm/spdk_tgt_trace.pid69148 00:17:14.263 Removing: /var/run/dpdk/spdk0 00:17:14.263 Removing: /var/run/dpdk/spdk_pid100014 00:17:14.263 Removing: /var/run/dpdk/spdk_pid100276 00:17:14.263 Removing: /var/run/dpdk/spdk_pid100321 00:17:14.263 Removing: /var/run/dpdk/spdk_pid100352 00:17:14.263 Removing: /var/run/dpdk/spdk_pid100581 00:17:14.263 Removing: /var/run/dpdk/spdk_pid100746 00:17:14.263 Removing: 
/var/run/dpdk/spdk_pid100828 00:17:14.263 Removing: /var/run/dpdk/spdk_pid100915 00:17:14.263 Removing: /var/run/dpdk/spdk_pid100957 00:17:14.263 Removing: /var/run/dpdk/spdk_pid100983 00:17:14.263 Removing: /var/run/dpdk/spdk_pid68980 00:17:14.263 Removing: /var/run/dpdk/spdk_pid69148 00:17:14.263 Removing: /var/run/dpdk/spdk_pid69351 00:17:14.263 Removing: /var/run/dpdk/spdk_pid69438 00:17:14.263 Removing: /var/run/dpdk/spdk_pid69467 00:17:14.263 Removing: /var/run/dpdk/spdk_pid69573 00:17:14.263 Removing: /var/run/dpdk/spdk_pid69591 00:17:14.263 Removing: /var/run/dpdk/spdk_pid69779 00:17:14.263 Removing: /var/run/dpdk/spdk_pid69858 00:17:14.263 Removing: /var/run/dpdk/spdk_pid69932 00:17:14.263 Removing: /var/run/dpdk/spdk_pid70032 00:17:14.263 Removing: /var/run/dpdk/spdk_pid70118 00:17:14.263 Removing: /var/run/dpdk/spdk_pid70152 00:17:14.263 Removing: /var/run/dpdk/spdk_pid70194 00:17:14.263 Removing: /var/run/dpdk/spdk_pid70259 00:17:14.263 Removing: /var/run/dpdk/spdk_pid70387 00:17:14.263 Removing: /var/run/dpdk/spdk_pid70807 00:17:14.263 Removing: /var/run/dpdk/spdk_pid70854 00:17:14.545 Removing: /var/run/dpdk/spdk_pid70901 00:17:14.545 Removing: /var/run/dpdk/spdk_pid70917 00:17:14.545 Removing: /var/run/dpdk/spdk_pid70988 00:17:14.545 Removing: /var/run/dpdk/spdk_pid71004 00:17:14.545 Removing: /var/run/dpdk/spdk_pid71066 00:17:14.545 Removing: /var/run/dpdk/spdk_pid71078 00:17:14.545 Removing: /var/run/dpdk/spdk_pid71131 00:17:14.545 Removing: /var/run/dpdk/spdk_pid71149 00:17:14.545 Removing: /var/run/dpdk/spdk_pid71191 00:17:14.545 Removing: /var/run/dpdk/spdk_pid71209 00:17:14.545 Removing: /var/run/dpdk/spdk_pid71336 00:17:14.545 Removing: /var/run/dpdk/spdk_pid71378 00:17:14.545 Removing: /var/run/dpdk/spdk_pid71456 00:17:14.545 Removing: /var/run/dpdk/spdk_pid72626 00:17:14.545 Removing: /var/run/dpdk/spdk_pid72821 00:17:14.545 Removing: /var/run/dpdk/spdk_pid72950 00:17:14.545 Removing: /var/run/dpdk/spdk_pid73560 00:17:14.545 Removing: 
/var/run/dpdk/spdk_pid73755 00:17:14.545 Removing: /var/run/dpdk/spdk_pid73884 00:17:14.545 Removing: /var/run/dpdk/spdk_pid74489 00:17:14.545 Removing: /var/run/dpdk/spdk_pid74808 00:17:14.545 Removing: /var/run/dpdk/spdk_pid74937 00:17:14.545 Removing: /var/run/dpdk/spdk_pid76267 00:17:14.545 Removing: /var/run/dpdk/spdk_pid76509 00:17:14.545 Removing: /var/run/dpdk/spdk_pid76638 00:17:14.545 Removing: /var/run/dpdk/spdk_pid77972 00:17:14.545 Removing: /var/run/dpdk/spdk_pid78210 00:17:14.545 Removing: /var/run/dpdk/spdk_pid78339 00:17:14.545 Removing: /var/run/dpdk/spdk_pid79674 00:17:14.545 Removing: /var/run/dpdk/spdk_pid80109 00:17:14.545 Removing: /var/run/dpdk/spdk_pid80238 00:17:14.545 Removing: /var/run/dpdk/spdk_pid81668 00:17:14.545 Removing: /var/run/dpdk/spdk_pid81915 00:17:14.545 Removing: /var/run/dpdk/spdk_pid82045 00:17:14.546 Removing: /var/run/dpdk/spdk_pid83475 00:17:14.546 Removing: /var/run/dpdk/spdk_pid83718 00:17:14.546 Removing: /var/run/dpdk/spdk_pid83852 00:17:14.546 Removing: /var/run/dpdk/spdk_pid85278 00:17:14.546 Removing: /var/run/dpdk/spdk_pid85754 00:17:14.546 Removing: /var/run/dpdk/spdk_pid85883 00:17:14.546 Removing: /var/run/dpdk/spdk_pid86016 00:17:14.546 Removing: /var/run/dpdk/spdk_pid86411 00:17:14.546 Removing: /var/run/dpdk/spdk_pid87113 00:17:14.546 Removing: /var/run/dpdk/spdk_pid87494 00:17:14.546 Removing: /var/run/dpdk/spdk_pid88169 00:17:14.546 Removing: /var/run/dpdk/spdk_pid88594 00:17:14.546 Removing: /var/run/dpdk/spdk_pid89332 00:17:14.546 Removing: /var/run/dpdk/spdk_pid89743 00:17:14.546 Removing: /var/run/dpdk/spdk_pid91669 00:17:14.546 Removing: /var/run/dpdk/spdk_pid92096 00:17:14.546 Removing: /var/run/dpdk/spdk_pid92525 00:17:14.546 Removing: /var/run/dpdk/spdk_pid94562 00:17:14.546 Removing: /var/run/dpdk/spdk_pid95031 00:17:14.546 Removing: /var/run/dpdk/spdk_pid95540 00:17:14.546 Removing: /var/run/dpdk/spdk_pid96568 00:17:14.546 Removing: /var/run/dpdk/spdk_pid96885 00:17:14.546 Removing: 
/var/run/dpdk/spdk_pid97800 00:17:14.546 Removing: /var/run/dpdk/spdk_pid98116 00:17:14.546 Removing: /var/run/dpdk/spdk_pid99031 00:17:14.546 Removing: /var/run/dpdk/spdk_pid99344 00:17:14.546 Clean 00:17:14.823 05:06:25 -- common/autotest_common.sh@1451 -- # return 0 00:17:14.823 05:06:25 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup 00:17:14.823 05:06:25 -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:14.823 05:06:25 -- common/autotest_common.sh@10 -- # set +x 00:17:14.823 05:06:25 -- spdk/autotest.sh@387 -- # timing_exit autotest 00:17:14.823 05:06:25 -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:14.823 05:06:25 -- common/autotest_common.sh@10 -- # set +x 00:17:14.823 05:06:25 -- spdk/autotest.sh@388 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:17:14.823 05:06:25 -- spdk/autotest.sh@390 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:17:14.823 05:06:25 -- spdk/autotest.sh@390 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:17:14.823 05:06:25 -- spdk/autotest.sh@392 -- # [[ y == y ]] 00:17:14.823 05:06:25 -- spdk/autotest.sh@394 -- # hostname 00:17:14.823 05:06:25 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:17:15.083 geninfo: WARNING: invalid characters removed from testname! 
00:17:41.649 05:06:48 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:17:41.649 05:06:51 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:17:42.588 05:06:53 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:17:44.494 05:06:55 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:17:46.402 05:06:57 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o 
/home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:17:48.311 05:06:59 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:17:50.221 05:07:00 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:17:50.221 05:07:01 -- common/autotest_common.sh@1680 -- $ [[ y == y ]] 00:17:50.221 05:07:01 -- common/autotest_common.sh@1681 -- $ lcov --version 00:17:50.221 05:07:01 -- common/autotest_common.sh@1681 -- $ awk '{print $NF}' 00:17:50.221 05:07:01 -- common/autotest_common.sh@1681 -- $ lt 1.15 2 00:17:50.221 05:07:01 -- scripts/common.sh@373 -- $ cmp_versions 1.15 '<' 2 00:17:50.221 05:07:01 -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:17:50.221 05:07:01 -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:17:50.221 05:07:01 -- scripts/common.sh@336 -- $ IFS=.-: 00:17:50.221 05:07:01 -- scripts/common.sh@336 -- $ read -ra ver1 00:17:50.221 05:07:01 -- scripts/common.sh@337 -- $ IFS=.-: 00:17:50.221 05:07:01 -- scripts/common.sh@337 -- $ read -ra ver2 00:17:50.221 05:07:01 -- scripts/common.sh@338 -- $ local 'op=<' 00:17:50.221 05:07:01 -- scripts/common.sh@340 -- $ ver1_l=2 00:17:50.221 05:07:01 -- scripts/common.sh@341 -- $ ver2_l=1 00:17:50.221 05:07:01 -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:17:50.221 05:07:01 -- scripts/common.sh@344 -- $ case "$op" in 00:17:50.221 05:07:01 -- scripts/common.sh@345 -- $ : 1 00:17:50.221 05:07:01 -- scripts/common.sh@364 -- $ (( v = 0 )) 00:17:50.221 05:07:01 -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:50.221 05:07:01 -- scripts/common.sh@365 -- $ decimal 1 00:17:50.221 05:07:01 -- scripts/common.sh@353 -- $ local d=1 00:17:50.221 05:07:01 -- scripts/common.sh@354 -- $ [[ 1 =~ ^[0-9]+$ ]] 00:17:50.221 05:07:01 -- scripts/common.sh@355 -- $ echo 1 00:17:50.482 05:07:01 -- scripts/common.sh@365 -- $ ver1[v]=1 00:17:50.482 05:07:01 -- scripts/common.sh@366 -- $ decimal 2 00:17:50.482 05:07:01 -- scripts/common.sh@353 -- $ local d=2 00:17:50.482 05:07:01 -- scripts/common.sh@354 -- $ [[ 2 =~ ^[0-9]+$ ]] 00:17:50.482 05:07:01 -- scripts/common.sh@355 -- $ echo 2 00:17:50.482 05:07:01 -- scripts/common.sh@366 -- $ ver2[v]=2 00:17:50.482 05:07:01 -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:17:50.482 05:07:01 -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:17:50.482 05:07:01 -- scripts/common.sh@368 -- $ return 0 00:17:50.482 05:07:01 -- common/autotest_common.sh@1682 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:50.482 05:07:01 -- common/autotest_common.sh@1694 -- $ export 'LCOV_OPTS= 00:17:50.482 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:50.482 --rc genhtml_branch_coverage=1 00:17:50.482 --rc genhtml_function_coverage=1 00:17:50.482 --rc genhtml_legend=1 00:17:50.482 --rc geninfo_all_blocks=1 00:17:50.482 --rc geninfo_unexecuted_blocks=1 00:17:50.482 00:17:50.482 ' 00:17:50.482 05:07:01 -- common/autotest_common.sh@1694 -- $ LCOV_OPTS=' 00:17:50.482 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:50.482 --rc genhtml_branch_coverage=1 00:17:50.482 --rc genhtml_function_coverage=1 00:17:50.482 --rc genhtml_legend=1 00:17:50.482 --rc geninfo_all_blocks=1 00:17:50.482 --rc geninfo_unexecuted_blocks=1 00:17:50.482 00:17:50.482 ' 00:17:50.482 05:07:01 -- common/autotest_common.sh@1695 -- $ export 'LCOV=lcov 00:17:50.482 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:50.482 --rc genhtml_branch_coverage=1 00:17:50.482 --rc 
genhtml_function_coverage=1 00:17:50.482 --rc genhtml_legend=1 00:17:50.482 --rc geninfo_all_blocks=1 00:17:50.482 --rc geninfo_unexecuted_blocks=1 00:17:50.482 00:17:50.482 ' 00:17:50.482 05:07:01 -- common/autotest_common.sh@1695 -- $ LCOV='lcov 00:17:50.482 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:50.482 --rc genhtml_branch_coverage=1 00:17:50.482 --rc genhtml_function_coverage=1 00:17:50.482 --rc genhtml_legend=1 00:17:50.482 --rc geninfo_all_blocks=1 00:17:50.482 --rc geninfo_unexecuted_blocks=1 00:17:50.482 00:17:50.482 ' 00:17:50.482 05:07:01 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:50.482 05:07:01 -- scripts/common.sh@15 -- $ shopt -s extglob 00:17:50.482 05:07:01 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:17:50.482 05:07:01 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:50.482 05:07:01 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:50.482 05:07:01 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:50.482 05:07:01 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:50.482 05:07:01 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:50.482 05:07:01 -- paths/export.sh@5 -- $ export PATH 00:17:50.482 05:07:01 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:50.482 05:07:01 -- common/autobuild_common.sh@478 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:17:50.482 05:07:01 -- common/autobuild_common.sh@479 -- $ date +%s 00:17:50.482 05:07:01 -- common/autobuild_common.sh@479 -- $ mktemp -dt spdk_1734152821.XXXXXX 00:17:50.482 05:07:01 -- common/autobuild_common.sh@479 -- $ SPDK_WORKSPACE=/tmp/spdk_1734152821.pYJ7CT 00:17:50.482 05:07:01 -- common/autobuild_common.sh@481 -- $ [[ -n '' ]] 00:17:50.482 05:07:01 -- common/autobuild_common.sh@485 -- $ '[' -n v23.11 ']' 00:17:50.482 05:07:01 -- common/autobuild_common.sh@486 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:17:50.482 05:07:01 -- common/autobuild_common.sh@486 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:17:50.482 05:07:01 -- common/autobuild_common.sh@492 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:17:50.482 05:07:01 -- common/autobuild_common.sh@494 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme 
--exclude /tmp --status-bugs' 00:17:50.482 05:07:01 -- common/autobuild_common.sh@495 -- $ get_config_params 00:17:50.482 05:07:01 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:17:50.482 05:07:01 -- common/autotest_common.sh@10 -- $ set +x 00:17:50.482 05:07:01 -- common/autobuild_common.sh@495 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-dpdk=/home/vagrant/spdk_repo/dpdk/build' 00:17:50.482 05:07:01 -- common/autobuild_common.sh@497 -- $ start_monitor_resources 00:17:50.482 05:07:01 -- pm/common@17 -- $ local monitor 00:17:50.482 05:07:01 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:17:50.482 05:07:01 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:17:50.482 05:07:01 -- pm/common@25 -- $ sleep 1 00:17:50.482 05:07:01 -- pm/common@21 -- $ date +%s 00:17:50.482 05:07:01 -- pm/common@21 -- $ date +%s 00:17:50.482 05:07:01 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1734152821 00:17:50.482 05:07:01 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1734152821 00:17:50.482 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1734152821_collect-vmstat.pm.log 00:17:50.482 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1734152821_collect-cpu-load.pm.log 00:17:51.424 05:07:02 -- common/autobuild_common.sh@498 -- $ trap stop_monitor_resources EXIT 00:17:51.424 05:07:02 -- spdk/autopackage.sh@10 -- $ [[ 0 -eq 1 ]] 00:17:51.424 05:07:02 -- spdk/autopackage.sh@14 -- $ timing_finish 00:17:51.424 05:07:02 -- common/autotest_common.sh@736 -- $ 
flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:17:51.424 05:07:02 -- common/autotest_common.sh@737 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:17:51.424 05:07:02 -- common/autotest_common.sh@740 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:17:51.424 05:07:02 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:17:51.424 05:07:02 -- pm/common@29 -- $ signal_monitor_resources TERM 00:17:51.424 05:07:02 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:17:51.424 05:07:02 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:17:51.424 05:07:02 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:17:51.424 05:07:02 -- pm/common@44 -- $ pid=102500 00:17:51.424 05:07:02 -- pm/common@50 -- $ kill -TERM 102500 00:17:51.424 05:07:02 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:17:51.424 05:07:02 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:17:51.424 05:07:02 -- pm/common@44 -- $ pid=102502 00:17:51.424 05:07:02 -- pm/common@50 -- $ kill -TERM 102502 00:17:51.424 + [[ -n 6161 ]] 00:17:51.424 + sudo kill 6161 00:17:51.695 [Pipeline] } 00:17:51.711 [Pipeline] // timeout 00:17:51.716 [Pipeline] } 00:17:51.731 [Pipeline] // stage 00:17:51.736 [Pipeline] } 00:17:51.775 [Pipeline] // catchError 00:17:51.795 [Pipeline] stage 00:17:51.799 [Pipeline] { (Stop VM) 00:17:51.830 [Pipeline] sh 00:17:52.107 + vagrant halt 00:17:54.646 ==> default: Halting domain... 00:18:02.790 [Pipeline] sh 00:18:03.073 + vagrant destroy -f 00:18:05.612 ==> default: Removing domain... 
00:18:05.625 [Pipeline] sh 00:18:05.911 + mv output /var/jenkins/workspace/raid-vg-autotest/output 00:18:05.921 [Pipeline] } 00:18:05.935 [Pipeline] // stage 00:18:05.940 [Pipeline] } 00:18:05.954 [Pipeline] // dir 00:18:05.959 [Pipeline] } 00:18:05.974 [Pipeline] // wrap 00:18:05.979 [Pipeline] } 00:18:05.992 [Pipeline] // catchError 00:18:06.001 [Pipeline] stage 00:18:06.003 [Pipeline] { (Epilogue) 00:18:06.015 [Pipeline] sh 00:18:06.314 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:18:10.561 [Pipeline] catchError 00:18:10.563 [Pipeline] { 00:18:10.576 [Pipeline] sh 00:18:10.862 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:18:10.862 Artifacts sizes are good 00:18:10.872 [Pipeline] } 00:18:10.886 [Pipeline] // catchError 00:18:10.897 [Pipeline] archiveArtifacts 00:18:10.905 Archiving artifacts 00:18:10.999 [Pipeline] cleanWs 00:18:11.011 [WS-CLEANUP] Deleting project workspace... 00:18:11.011 [WS-CLEANUP] Deferred wipeout is used... 00:18:11.019 [WS-CLEANUP] done 00:18:11.021 [Pipeline] } 00:18:11.035 [Pipeline] // stage 00:18:11.041 [Pipeline] } 00:18:11.054 [Pipeline] // node 00:18:11.061 [Pipeline] End of Pipeline 00:18:11.149 Finished: SUCCESS